In C/C++, we can print the memory content of a variable like this:
double d = 234.5;
unsigned char *p = (unsigned char *)&d;
size_t i;
for (i = 0; i < sizeof d; ++i)
    printf("%02x\n", p[i]);
Yes, I know we can use pickle.dump() to serialize an object, but it seems to generate a lot of redundant data.
How can we achieve this in pure Python?
The internal memory representation of a Python object cannot be reached from pure Python code; you'd need to write a C extension.
If you're designing your own serialization protocol, then maybe the struct module is what you're looking for. It allows converting from Python values to binary data and back, in the format you specify. For example:
import struct
print(list(struct.pack('d', 3.14)))
will display [31, 133, 235, 81, 184, 30, 9, 64] because those are the byte values for the double precision representation of 3.14.
NOTE: struct.pack returns a bytes object in Python 3.x but a str object in Python 2.x. To see the numeric codes of the bytes in Python 2.x, you need to use print map(ord, struct.pack(...)) instead.
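For comparison with the C loop in the question, here is a minimal Python 3 sketch that prints the same kind of per-byte hex dump:
import struct

d = 234.5
# Pack the float into its 8-byte IEEE-754 representation (native byte order),
# then print each byte in hex, mimicking the C loop from the question.
for byte in struct.pack('d', d):
    print('%02x' % byte)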
You cannot do this in pure Python, but you could write a Python extension module in C that does exactly what you ask for. It would probably not be very useful, though. You can read more about extension modules here.
I assume that by Python you mean CPython, and not PyPy, Jython or IronPython.
Related
During a Cython meetup, a speaker pointed to other data types such as cython.ssize_t. The type ssize_t is briefly mentioned in this Wikipedia article, but it is not well explained there. Similarly, the Cython documentation mentions types mostly in terms of how they are automatically converted.
What are all the data types available in Cython and what are their specifications?
You basically have access to most of the C types.
Here are the equivalents of all the Python types (unless I have missed some), taken from the O'Reilly Cython book:
Python bool:
bint (a boolean coerced to and from a C int: 0 is False, anything else is True)
Python int and long
[unsigned] char
[unsigned] short
[unsigned] int
[unsigned] long
[unsigned] long long
Python float
float
double
long double
Python complex
float complex
double complex
Python bytes / str / unicode
char *
std::string
For size_t and Py_ssize_t, keep in mind that these are typedefs, not distinct types.
Py_ssize_t is defined in Python.h, which Cython imports implicitly; it is a signed type that can hold the size (in bytes) of the largest object the Python interpreter can ever create.
size_t, on the other hand, is a standard C89 unsigned type, defined in <stddef.h>.
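If you want to see what these C types amount to on your own platform, here is a quick Python sketch using ctypes (ctypes.c_ssize_t stands in for Py_ssize_t here; the printed sizes are platform-dependent):
import ctypes

# Print the size in bytes of each C type listed above.
for name, ctype in [('short', ctypes.c_short),
                    ('int', ctypes.c_int),
                    ('long', ctypes.c_long),
                    ('long long', ctypes.c_longlong),
                    ('float', ctypes.c_float),
                    ('double', ctypes.c_double),
                    ('long double', ctypes.c_longdouble),
                    ('size_t', ctypes.c_size_t),
                    ('Py_ssize_t', ctypes.c_ssize_t)]:
    print('%-12s %d bytes' % (name, ctypes.sizeof(ctype)))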
I have a unicode string f. I want to memset it to 0, so that print f displays null characters (\0).
I am using ctypes.memset to achieve this -
>>> f
u'abc'
>>> print("%s" % type(f))
<type 'unicode'>
>>> import ctypes
>>> ctypes.memset(id(f)+50, 0, 6)
4363962530
>>> f
u'abc'
>>> print f
abc
Why did the memory location not get memset in the case of the unicode string?
It works perfectly for a str object.
Thanks for the help.
First, this is almost certainly a very bad idea. Python expects strings to be immutable. There's a reason that even the C API won't let you change their contents after they're flagged ready. If you're just doing this to play around with the interpreter's implementation, that can be fun and instructive, but if you're doing it for any real-life purpose, you're probably doing something wrong.
In particular, if you're doing it for "security", what you almost certainly really want to do is to not create a unicode in the first place, but instead create, say, a bytearray with the UTF-16 or UTF-32 encoding of your string, which can be zeroed out in a way that's safe, portable, and a lot easier.
Anyway, there's no reason to expect that two completely different types should store their buffers at the same offset.
In CPython 2.x, a str is a PyStringObject:
typedef struct {
    PyObject_VAR_HEAD
    long ob_shash;
    int ob_sstate;
    char ob_sval[1];
} PyStringObject;
That ob_sval is the buffer; the offset should be 36 on 64-bit builds and 20 on typical 32-bit builds.
In a comment, you say:
I read it somewhere, and also the offset for a string type is 37 on my system, which is what sys.getsizeof('') shows (>>> sys.getsizeof('') returns 37).
The offset for a string buffer is actually 36, not 37. And the fact that it's even that close is just a coincidence of the way str is implemented. (Hopefully you can understand why by looking at the struct definition—if not, you definitely shouldn't be writing code like this.) There's no reason to expect the same trick to work for some other type without looking at its implementation.
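For example, on a 64-bit CPython 2.x build you can check that offset interactively (a sketch that relies on implementation details; ctypes.string_at reads raw memory at a given address):
>>> import ctypes
>>> s = 'abc'
>>> ctypes.string_at(id(s) + 36, len(s))  # 36 = offset of ob_sval on 64-bit
'abc'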
A unicode is a PyUnicodeObject:
typedef struct {
    PyObject_HEAD
    Py_ssize_t length;   /* Length of raw Unicode data in buffer */
    Py_UNICODE *str;     /* Raw Unicode buffer */
    long hash;           /* Hash value; -1 if not set */
    PyObject *defenc;    /* (Default) Encoded version as Python
                            string, or NULL; this is used for
                            implementing the buffer protocol */
} PyUnicodeObject;
Its buffer is not even inside the object itself; that str member is a pointer to the buffer (which is not guaranteed to be right after the struct). Its offset should be 24 on 64-bit builds, and 12 on typical 32-bit builds. So, to do the equivalent, you'd need to read the pointer there, then follow it to find the location to memset.
If you're using a narrow-Unicode build, it should look like this:
>>> ctypes.POINTER(ctypes.c_uint16 * len(g)).from_address(id(g)+24).contents[:]
[97, 98, 99]
That's the ctypes translation of taking that = *(uint16_t **)(((char *)g)+24) and reading the array that starts at that[0] and ends at that[len(g)-1], which is what you'd have to do if you were writing C code and didn't have access to the unicodeobject.h header.
(In the test I just quoted, g is at 0x10a598090, while its str member points to 0x10a3b09e0, so the buffer is not immediately after the struct, or anywhere near it; it's about 2MB before it.)
For a wide-Unicode build, the same thing with c_uint32.
So, that should show you what you want to memset.
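Putting that together, here is a minimal sketch (assuming CPython 2.x on a 64-bit, narrow-Unicode build; this is dangerous and for exploration only):
import ctypes

g = u'abc'
# Read the PyUnicodeObject.str pointer stored at offset 24 (64-bit build),
# then zero the buffer it points to: 2 bytes per Py_UNICODE on a narrow build.
buf = ctypes.c_void_p.from_address(id(g) + 24).value
ctypes.memset(buf, 0, len(g) * 2)
print(repr(g))  # u'\x00\x00\x00'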
And you should also see a serious implication for your attempt at "security" here. (If I have to point it out, that's yet another indication that you should not be writing this code.)
Say I have the following code in C++:
union {
    int32_t i;
    uint32_t ui;
};
i = SomeFunc();
std::string test(std::to_string(ui));
std::ofstream outFile(test);
And say I had the value of i somehow in Python, how would I be able to get the name of the file?
For those of you who are unfamiliar with C++: what I am doing here is writing some value in signed 32-bit integer format to i and then interpreting the bitwise representation as an unsigned 32-bit integer via ui. I am taking the same 32 bits and interpreting them in two different ways.
How can I do this in Python? There does not seem to be any explicit type specification in Python, so how can I reinterpret some set of bits in a different way?
EDIT: I am using Python 2.7.12
I would use Python's struct module for interpreting bits in different ways.
Something like the following prints -12 as an unsigned integer:
import struct
p = struct.pack("@i", -12)
print("{}".format(struct.unpack("@I", p)[0]))
It seems that the code crashes when I do extract<const char*>("a unicode string").
Anyone know how to solve this?
This compiles and works for me, with your example string and using Python 2.x:
void process_unicode(boost::python::object u) {
    using namespace boost::python;
    const char* value = extract<const char*>(str(u).encode("utf-8"));
    std::cout << "The string value is '" << value << "'" << std::endl;
}
You can write a specific from-python converter if you wish to auto-convert PyUnicode (Python 2.x) to const wchar_t* or to a type from ICU (that seems to be the common recommendation for dealing with Unicode in C++).
If you want full support for Unicode characters which are not in the ASCII range (for example, accented characters such as á, ç or ï), you will need to write the from-python converter. Note this will have to be done separately for Python 2.x and 3.x if you wish to support both. For Python 3.x, the PyUnicode type was deprecated, and now the string type works as PyUnicode used to for Python 2.x. Nothing that a couple of #if PY_VERSION_HEX >= 0x03000000 directives cannot handle.
[edit]
The above comment was wrong. Note that, since Python 3.x treats unicode strings as normal strings, boost::python will wrap that into boost::python::str objects. I have not verified how those are handled w.r.t. unicode translation in this case.
Have you tried
extract<std::string>("a unicode string").c_str()
or
extract<wchar_t*>(...)
I often have to write code in other languages that interacts with C structs. Most typically this involves writing Python code with the struct or ctypes modules.
So I'll have a .h file full of struct definitions, and I have to manually read through them and duplicate those definitions in my Python code. This is time-consuming and error-prone, and it's difficult to keep the two definitions in sync when they change frequently.
Is there some tool or library in any language (it doesn't have to be C or Python) which can take a .h file and produce a structured list of its structs and their fields? I'd love to be able to write a script to automatically generate my struct definitions in Python, and I don't want to have to process arbitrary C code to do it. Regular expressions would work great about 90% of the time and then cause endless headaches for the remaining 10%.
If you compile your C code with debugging (-g), pahole (git) can give you the exact structure layouts being used.
$ pahole /bin/dd
…
struct option {
    const char * name;   /*  0  8 */
    int has_arg;         /*  8  4 */

    /* XXX 4 bytes hole, try to pack */

    int * flag;          /* 16  8 */
    int val;             /* 24  4 */

    /* size: 32, cachelines: 1, members: 4 */
    /* sum members: 24, holes: 1, sum holes: 4 */
    /* padding: 4 */
    /* last cacheline: 32 bytes */
};
…
This should be quite a lot nicer to parse than straight C.
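For instance, a rough sketch of pulling field names, offsets and sizes out of that output with a few lines of Python (the regular expression assumes pahole's "type name; /* offset size */" comment layout shown above):
import re

# Match member lines like "const char * name; /* 0 8 */".
FIELD = re.compile(r'^\s*(.+?)\s*(\w+);\s*/\*\s*(\d+)\s+(\d+)\s*\*/')

def parse_pahole(text):
    """Return (name, c_type, offset, size) for each struct member."""
    fields = []
    for line in text.splitlines():
        m = FIELD.match(line)
        if m:
            c_type, name, offset, size = m.groups()
            fields.append((name, c_type.strip(), int(offset), int(size)))
    return fields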
> Regular expressions would work great about 90% of the time and then cause endless headaches for the remaining 10%.
The headaches happen in the cases where the C code contains syntax that you didn't think of when writing your regular expressions. Then you go back and realise that C can't really be parsed by regular expressions, and life stops being fun.
Try turning it around: define your own simple format, which allows fewer tricks than C does, and generate both the C header file and the Python interface code from your file:
define socketopts
int16 port
int32 ipv4address
int32 flags
Then you can easily write some Python to convert this to:
typedef struct {
    short port;
    int ipv4address;
    int flags;
} socketopts;
and also to emit a Python class which uses struct to pack/unpack three values (possibly two of them big-endian and the other native-endian, up to you).
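A minimal sketch of such a generator (the toy grammar, type table and function name here are my own; it uses native byte order and alignment so the packed bytes match the C compiler's layout):
import struct

# Map the toy definition language's types to (C type, struct format code).
C_TYPES = {'int16': ('short', 'h'), 'int32': ('int', 'i')}

def generate(definition):
    lines = definition.strip().splitlines()
    name = lines[0].split()[1]                     # e.g. "define socketopts"
    fields = [line.split() for line in lines[1:]]  # [type, field_name] pairs
    body = '\n'.join('    %s %s;' % (C_TYPES[t][0], f) for t, f in fields)
    c_struct = 'typedef struct {\n%s\n} %s;' % (body, name)
    fmt = ''.join(C_TYPES[t][1] for t, _ in fields)
    return c_struct, fmt

c_struct, fmt = generate("define socketopts\nint16 port\nint32 ipv4address\nint32 flags")
print(c_struct)                              # the C header side
print(struct.pack(fmt, 80, 0x7F000001, 0))  # the Python packing side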
Have a look at SWIG or SIP, which will generate the interface code for you, or use ctypes.
Have you looked at SWIG?
I have quite successfully used GCCXML on fairly large projects. You get an XML representation of the C code (including structures) which you can post-process with some simple Python.
ctypes-codegen or ctypeslib (same thing, I think) will generate ctypes Structure definitions (also other things, I believe, but I only tried structs) by parsing header files using GCCXML. It's no longer supported, but will likely work in some cases.
A friend of mine wrote a C parser for this task, which he uses with cog.