How to convert int to little endian hex with "\x" prefix? - python

I want to convert a decimal integer into a \xLO\xHI hex string, keeping the "\x" prefix on every byte and without translating printable bytes into their ASCII characters.
What I want to achieve:
>>> dec_to_hex(512)
"\x00\x02"
The following solutions I found while searching for an answer aren't good enough and I'll explain why:
This one doesn't add the "\x" prefix and doesn't convert the value to bytes:
>>> hex(512)
'0x200'
This example is really close, but it takes hexadecimal input (I need decimal) and translates printable bytes to ASCII characters:
>>> from binascii import unhexlify
>>> unhexlify('65004100430005FF70000000')
'e\x00A\x00C\x00\x05\xffp\x00\x00\x00'
This one translates printable bytes into ASCII characters:
>>> import struct
>>> struct.pack('<h', 512)
'\x00\x02'
>>> struct.pack('<h', 97)
'a\x00'

Convert hex to decimal/string in python

So I wrote this small socket program to send a UDP packet and receive the response:
# sock is a UDP socket; data, MCAST_GRP and MCAST_PORT are set up earlier
sock.sendto(data, (MCAST_GRP, MCAST_PORT))
msgFromServer = sock.recvfrom(1024)
banner = msgFromServer[0]
print(msgFromServer[0])
#name = msgFromServer[0].decode('ascii', 'ignore')
#print(name)
Response is
b'\xff\xff\xff\xffI\x11server banner\x00map\x00game\x00Counter-Strike: Global Offensive\x00\xda\x02\x00\x10\x00dl\x01\x011.38.2.2\x00\xa1\x87iempty,secure\x00\xda\x02\x00\x00\x00\x00\x00\x00'
Now the thing is I want to convert all the hex values to decimal.
I tried decode, but then I end up losing all the hex values.
How can I convert all the hex values to decimal in my case?
Example: \x13 = 19
EDIT: I guess a better way to phrase my question is:
How do I convert only the hex values to decimal in the given response?
There are two problems here:
handling the non-ASCII bytes
handling \xhh sequences which are legitimate characters in Python strings
We can address both with a mix of regular expressions and string methods.
First, decode the bytes to ASCII using the backslashreplace error handler to avoid losing the non-ASCII bytes.
>>> import re
>>>
>>> decoded = msgFromServer[0].decode('ascii', errors='backslashreplace')
>>> decoded
'\\xff\\xff\\xff\\xffI\x11server banner\x00map\x00game\x00Counter-Strike: Global Offensive\x00\\xda\x02\x00\x10\x00dl\x01\x011.38.2.2\x00\\xa1\\x87iempty,secure\x00\\xda\x02\x00\x00\x00\x00\x00\x00'
Next, use a regular expression to replace the non-ASCII '\\xhh' sequences with their numeric equivalents:
>>> temp = re.sub(r'\\x([a-fA-F0-9]{2})', lambda m: str(int(m.group(1), 16)), decoded)
>>> temp
'255255255255I\x11server banner\x00map\x00game\x00Counter-Strike: Global Offensive\x00218\x02\x00\x10\x00dl\x01\x011.38.2.2\x00161135iempty,secure\x00218\x02\x00\x00\x00\x00\x00\x00'
Finally, map the remaining control characters (the genuine \xhh escapes left in the decoded string, all below code point 32 here) to their decimal values using str.translate:
>>> tt = str.maketrans({x: str(x) for x in range(32)})
>>> final = temp.translate(tt)
>>> final
'255255255255I17server banner0map0game0Counter-Strike: Global Offensive021820160dl111.38.2.20161135iempty,secure02182000000'
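Putting the three steps together into a single helper (the name bytes_to_decimal_text is just illustrative):
import re

def bytes_to_decimal_text(data):
    # Render every non-ASCII byte and every control character of `data`
    # as its decimal value, keeping the readable text intact.
    decoded = data.decode('ascii', errors='backslashreplace')
    # non-ASCII bytes now appear as literal '\xhh' sequences
    decoded = re.sub(r'\\x([0-9a-fA-F]{2})',
                     lambda m: str(int(m.group(1), 16)), decoded)
    # genuine control characters (code points < 32) become decimal as well
    return decoded.translate(str.maketrans({x: str(x) for x in range(32)}))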
You can first convert the bytes representation to hex using the bytes.hex method and then cast it to an integer with the appropriate base using int(x, base):
>>> b'\x13'.hex()
'13'
>>> int(b'\x13'.hex(), 16)
19
Assuming v contains the response, what you are asking for is
[int(i) for i in v]
I suspect this isn't quite what you want, but it is what I read from the question.
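For illustration, a quick sketch against the start of the banner from the question (in Python 3, indexing or iterating over a bytes object already yields integers, so int(i) is optional):
v = b'\xff\xff\xff\xffI\x11server banner\x00'
print([int(i) for i in v][:6])   # [255, 255, 255, 255, 73, 17]
print(v[4], chr(v[4]))           # 73 I  -- every byte is just a number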

python string to hex with escaped hex values

I have a string like "Some characters \x00\x80\x34 and then some other characters". How can I convert the regular characters to their hex equivalent, while converting \x00 to the actual 00 hex value?
binascii.hexlify() considers '\', 'x', '0', '0' as actual characters.
Later edit:
The string itself is produced by another function. When I print it, it actually prints "\x00".
As I understand it, you are trying to convert only the characters that are not hex values to hex. It would help if you gave a sample input string that you are trying to convert.
You can also convert to hex values using just the built-in encoding and decoding methods (note that the 'hex' codec used below is Python 2 only). That should take care of what you are trying to do. The following three lines are what I ran in the terminal on my machine, and they gave the output you are expecting. Hope it helps:
aStr = "Some characters \x00\x80\x34 and then some other characters"
aStr.encode("hex")
aStr.encode("hex").decode("hex")
It's unclear what you're asking, since binascii.hexlify should work:
>>> import binascii
>>> s = "\x00\x80\x34"
>>> binascii.hexlify(s)
'008034'
>>> s = "foobar \x00\x80\x34 foobar"
>>> binascii.hexlify(s)
'666f6f6261722000803420666f6f626172'
foobar = 666f6f626172, space = 20
Reference: https://docs.python.org/3/library/binascii.html
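For completeness, a rough Python 3 equivalent (hexlify operates on bytes there, so the string is encoded with 'latin-1' first, which maps each code point below 256 to a single byte):
import binascii

s = "Some characters \x00\x80\x34 and then some other characters"
data = s.encode("latin-1")                # one byte per character
print(binascii.hexlify(data).decode())    # '536f6d65...' -- two hex digits per byte
print(data.hex())                         # same result via bytes.hex()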

How do I convert an int representing a UTF-8 character into a Unicode code point?

Let us use the character Latin Capital Letter A with Ogonek (U+0104) as an example.
I have an int that represents its UTF-8 encoded form:
my_int = 0xC484
# Decimal: `50308`
# Binary: `0b1100010010000100`
If I use the unichr function I get \uC484, or 쒄 (U+C484).
But, I need it to output: Ą
How do I convert my_int to a Unicode code point?
To convert the integer 0xC484 to the bytestring '\xc4\x84' (the UTF-8 representation of the Unicode character Ą), you can use struct.pack():
>>> import struct
>>> struct.pack(">H", 0xC484)
'\xc4\x84'
... where > in the format string represents big-endian, and H represents unsigned short int.
Once you have your UTF-8 bytestring, you can decode it to Unicode as usual:
>>> struct.pack(">H", 0xC484).decode("utf8")
u'\u0104'
>>> print struct.pack(">H", 0xC484).decode("utf8")
Ą
>>> int2bytes(0xC484).decode('utf-8')
u'\u0104'
>>> print(_)
Ą
where int2bytes() is defined here.
Encode the number to a hex string, using hex() or %x. Then you can interpret that as a series of hex bytes using the hex decoder. Finally use the utf-8 decoder to get a unicode string:
def weird_utf8_integer_to_unicode(n):
    s = '%x' % n
    if len(s) % 2:
        s = '0' + s
    return s.decode('hex').decode('utf-8')
The len check is in case the first byte is in the range 0x1–0xF, which would otherwise be missing a leading zero. This should be able to cope with a string of any length and any character (however, encoding a byte sequence in an integer like this cannot preserve leading zero bytes).
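On Python 3, where str.decode('hex') no longer exists, a minimal sketch of the same idea uses int.to_bytes (the helper name here is hypothetical):
def utf8_int_to_unicode(n):
    # Work out how many bytes the integer occupies, then decode them as UTF-8.
    length = max(1, (n.bit_length() + 7) // 8)
    return n.to_bytes(length, "big").decode("utf-8")

print(utf8_int_to_unicode(0xC484))   # Ą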

Convert a string of escaped hex to a hex number in Python

How do I convert a string of escaped hex characters to a single hex number?
Reading from a socket I get a string of \xFF\xFF\xFF.., etc. I want to convert this to a hex number, 0xFFFFFF, keeping any insignificant 0s, so \x00\xFF should be 0x00FF. I have tried various functions from binascii, but I have not had any luck.
Using struct.unpack:
>>> struct.unpack('>I', '\xFF\xFF\xFF\xFF') # >, !: big (network) endian
(4294967295,)
>>> hex(struct.unpack('>I', '\xFF\xFF\xFF\xFF')[0])
'0xffffffff'
>>> struct.unpack('>H', '\x00\xff')
(255,)
>>> '0x{:04x}'.format(struct.unpack('>H', '\x00\xff')[0])
'0x00ff'
>>> '0x{:04X}'.format(struct.unpack('>H', '\x00\xff')[0])
'0x00FF'
Format characters used:
I: 4-byte unsigned int
H: 2-byte unsigned int
UPDATE
If you intend to convert an arbitrary binary string into a hex string, you can use binascii.hexlify:
>>> import binascii
>>> '0x' + binascii.hexlify('\xFF\xFF\xFF')
'0xffffff'
>>> '0x' + binascii.hexlify('\x00\x00\xFF')
'0x0000ff'
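On Python 3 the struct and binascii calls above take bytes rather than str, and int.from_bytes handles arbitrary lengths directly; a small sketch:
data = b'\x00\xff'
value = int.from_bytes(data, "big")
print(hex(value))                                  # 0xff
print("0x{:0{}x}".format(value, len(data) * 2))    # 0x00ff, keeping the leading zero byte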

Bytes in a unicode Python string

In Python 2, Unicode strings may contain both unicode and bytes:
a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
I understand that this is absolutely not something one should write in his own code, but this is a string that I have to deal with.
The bytes in the string above are UTF-8 for ек (Unicode \u0435\u043a).
My objective is to get a unicode string containing everything in Unicode, which is to say Русский ек (\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a).
Encoding it to UTF-8 yields
>>> a.encode('utf-8')
'\xd0\xa0\xd1\x83\xd1\x81\xd1\x81\xd0\xba\xd0\xb8\xd0\xb9 \xc3\x90\xc2\xb5\xc3\x90\xc2\xba'
Which then decoded from UTF-8 gives the initial string with bytes in them, which is not good:
>>> a.encode('utf-8').decode('utf-8')
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
I found a hacky way to solve the problem, however:
>>> repr(a)
"u'\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \\xd0\\xb5\\xd0\\xba'"
>>> eval(repr(a)[1:])
'\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \xd0\xb5\xd0\xba'
>>> s = eval(repr(a)[1:]).decode('utf8')
>>> s
u'\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \u0435\u043a'
# Almost there, the bytes are proper now but the former real-unicode characters
# are now escaped with \u's; need to un-escape them.
>>> import re
>>> re.sub(u'\\\\u([a-f\\d]+)', lambda x : unichr(int(x.group(1), 16)), s)
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a' # Success!
This works fine but looks very hacky due to its use of eval, repr, and then additional regex'ing of the unicode string representation. Is there a cleaner way?
In Python 2, Unicode strings may contain both unicode and bytes:
No, they may not. They contain Unicode characters.
Within the original string, \xd0 is not a byte that's part of a UTF-8 encoding. It is the Unicode character with code point 208. u'\xd0' == u'\u00d0'. It just happens that the repr for Unicode strings in Python 2 prefers to represent characters with \x escapes where possible (i.e. code points < 256).
There is no way to look at the string and tell that the \xd0 byte is supposed to be part of some UTF-8 encoded character, or if it actually stands for that Unicode character by itself.
However, if you assume that you can always interpret those values as encoded ones, you could try writing something that analyzes each character in turn (use ord to convert to a code-point integer), decodes characters < 256 as UTF-8, and passes characters >= 256 through unchanged.
(In response to the comments above): this code converts everything that looks like utf8 and leaves other codepoints as is:
import re

a = u'\u0420\u0443\u0441 utf:\xd0\xb5\xd0\xba bytes:bl\xe4\xe4'

def convert(s):
    try:
        return s.group(0).encode('latin1').decode('utf8')
    except:
        return s.group(0)

a = re.sub(r'[\x80-\xFF]+', convert, a)
print a.encode('utf8')
Result:
Рус utf:ек bytes:blää
The problem is that your string is not actually encoded in a specific encoding. Your example string:
a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
is mixing Python's internal representation of unicode strings with UTF-8 encoded text. If we just consider the 'special' characters:
>>> orig = u'\u0435\u043a'
>>> bytes = u'\xd0\xb5\xd0\xba'
>>> print orig
ек
>>> print bytes
ÐµÐº
But you say, bytes is utf-8 encoded:
>>> print bytes.encode('utf-8')
ÐµÐº
>>> print bytes.encode('utf-8').decode('utf-8')
ÐµÐº
Wrong! But what about:
>>> bytes = '\xd0\xb5\xd0\xba'
>>> print bytes
ек
>>> print bytes.decode('utf-8')
ек
Hurrah.
So. What does this mean for me? It means you're (probably) solving the wrong problem. What you should be asking us/trying to figure out is why your strings are in this form to begin with and how to avoid it/fix it before you have them all mixed up.
You should convert unichrs to chrs, then decode them.
u'\xd0' == u'\u00d0' is True
$ python
>>> import re
>>> a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
>>> re.sub(r'[\000-\377]*', lambda m:''.join([chr(ord(i)) for i in m.group(0)]).decode('utf8'), a)
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a'
r'[\000-\377]*' will match unichrs u'[\u0000-\u00ff]*'
u'\xd0\xb5\xd0\xba' == u'\u00d0\u00b5\u00d0\u00ba'
You use utf8 encoded bytes as unicode code points (this is the PROBLEM)
I solve the problem by treating those mistaken unichars as the corresponding bytes.
I search for all of these mistaken unichars, convert them to chars, then decode them.
If I'm wrong, please tell me.
You've already got an answer, but here's a way to unscramble UTF-8-like Unicode sequences that is less likely to decode latin-1 Unicode sequences in error. The re.sub function:
Matches Unicode characters < U+0100 that resemble valid UTF-8 sequences (ref: RFC 3629).
Encodes the Unicode sequence into its equivalent latin-1 byte sequence.
Decodes the sequence using UTF-8 back into Unicode.
Replaces the original UTF-8-like sequence with the matching Unicode character.
Note this could still match a Unicode sequence if just the right characters appear next to each other, but it is much less likely.
import re
# your example
a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
# printable Unicode characters < 256.
a += ''.join(chr(n) for n in range(32,256)).decode('latin1')
# a few UTF-8 characters decoded as latin1.
a += ''.join(unichr(n) for n in [2**7-1,2**7,2**11-1,2**11]).encode('utf8').decode('latin1')
# Some non-BMP characters
a += u'\U00010000\U0010FFFF'.encode('utf8').decode('latin1')
print repr(a)
# Unicode codepoint sequences that resemble UTF-8 sequences.
p = re.compile(ur'''(?x)
\xF0[\x90-\xBF][\x80-\xBF]{2} | # Valid 4-byte sequences
[\xF1-\xF3][\x80-\xBF]{3} |
\xF4[\x80-\x8F][\x80-\xBF]{2} |
\xE0[\xA0-\xBF][\x80-\xBF] | # Valid 3-byte sequences
[\xE1-\xEC][\x80-\xBF]{2} |
\xED[\x80-\x9F][\x80-\xBF] |
[\xEE-\xEF][\x80-\xBF]{2} |
[\xC2-\xDF][\x80-\xBF] # Valid 2-byte sequences
''')
def replace(m):
return m.group(0).encode('latin1').decode('utf8')
print
print repr(p.sub(replace,a))
Output:
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba
!"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\x7f\xc2\x80\xdf\xbf\xe0\xa0\x80\xf0\x90\x80\x80\xf4\x8f\xbf\xbf'
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a
!"#$%&'()*+,-./0123456789:;<=>?#ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\x7f\x80\u07ff\u0800\U00010000\U0010ffff'
I solved it by
unicodeText.encode("utf-8").decode("unicode-escape").encode("latin1")
