How to get bytes of image in StringIO container - python

I am getting an image from an email attachment; it will never touch disk. The image will be placed into a StringIO container and processed by PIL. How do I get its size in bytes?
image_file = StringIO('image from email')
im = Image.open(image_file)

Use StringIO's .tell() method by seeking to the end of the file:
>>> from StringIO import StringIO
>>> s = StringIO("foobar")
>>> s.tell()
0
>>> s.seek(0, 2)
>>> s.tell()
6
In your case:
image_file = StringIO('image from email')
image_file.seek(0, 2)       # seek to the end
size = image_file.tell()    # number of bytes
image_file.seek(0)          # rewind to the start
im = Image.open(image_file)
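In Python 3, binary image data belongs in io.BytesIO rather than StringIO, and you can also measure the buffer without moving the read position. A minimal sketch (the payload here is a placeholder):

```python
from io import BytesIO

# Placeholder payload standing in for the attachment bytes.
image_file = BytesIO(b'image bytes from email')

# len() of the buffer contents gives the size in bytes,
# and the read position stays at 0 for PIL to use.
size = len(image_file.getvalue())
```

This avoids the seek-to-end-and-rewind dance entirely.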

Suppose you have:
>>> import os
>>> from io import StringIO
>>> s = StringIO("cat\u2014jack")
>>> s.seek(0, os.SEEK_END)
8
>>> print(s.tell())
8
That 8, however, is a character count, not a byte count: \u2014 is an em dash, a single character that takes 3 bytes in UTF-8.
>>> len("\u2014")
1
>>> len("\u2014".encode("utf-8"))
3
StringIO does not store text as UTF-8 either. CPython used to store strings as UCS-2 or UCS-4 internally; since PEP 393 it picks the narrowest representation (Latin-1, UCS-2, or UCS-4) that fits each string.
What matters in the end is the binary representation you go with. If you will eventually write the file out encoded as UTF-8, you have to encode the value in its entirety to know how many bytes it will take up: UTF-8 is a variable-length encoding, so a single character may require multiple bytes.
You could do something like:
>>> s = StringIO("cat\u2014jack")
>>> len(s.getvalue().encode('utf-8'))
10


How to decode longest sub-bytes into str?

Suppose I read a long bytes object from somewhere, knowing it is utf-8 encoded. But the read may not fully consume the available content so that the last character in the stream may be incomplete. Calling bytes.decode() on this object may result in a decode error. But what really fails is only the last few bytes. Is there a function that works in this case, returning the longest decoded string and the remaining bytes?
utf-8 encodes a character into at most 4 bytes, so repeatedly retrying the decode after trimming trailing bytes would work, but most of that computation is wasted, and I don't really like this solution.
To give a simple but concrete example:
>>> b0 = b'\xc3\x84\xc3\x96\xc3'
>>> b1 = b'\x9c\xc3\x84\xc3\x96\xc3\x9c'
>>> (b0 + b1).decode()
'ÄÖÜÄÖÜ'
(b0 + b1).decode() is fine, but b0.decode() will raise. The solution should be able to decode b0 for as much as possible and return the bytes that cannot be decoded.
You are describing the basic usage of io.TextIOWrapper: a buffered text stream over a binary stream.
>>> import io
>>> txt = 'before\N{PILE OF POO}after'
>>> b = io.BytesIO(txt.encode('utf-8'))
>>> t = io.TextIOWrapper(b)
>>> t.read(5)
'befor'
>>> t.read(1)
'e'
>>> t.read(1)
'💩'
>>> t.read(1)
'a'
Contrast with reading a bytes stream directly, where it would be possible to read halfway through an encoded pile of poo:
>>> b.seek(0)
0
>>> b.read(5)
b'befor'
>>> b.read(1)
b'e'
>>> b.read(1)
b'\xf0'
>>> b.read(1)
b'\x9f'
>>> b.read(1)
b'\x92'
>>> b.read(1)
b'\xa9'
>>> b.read(1)
b'a'
Specify encoding="utf-8" if you want to be explicit. The default encoding, i.e. locale.getpreferredencoding(False), would usually be utf-8 anyway.
As I mentioned in the comments under @wim's answer, I think you could use codecs.iterdecode(), which wraps an incremental decoder, to do this. Since it's a generator function, there's no need to manually save and restore its state between iterative calls to it.
Here's how it might be used to handle a situation like the one you described:
import codecs
from random import randint

def reader(sequence):
    """ Yield random-length chunks of sequence until exhausted. """
    plural = lambda word, n, ending='s': (word+ending) if n > 1 else word
    i = 0
    while i < len(sequence):
        size = randint(1, 4)
        chunk = sequence[i: i+size]
        hexrepr = '0x' + ''.join('%02X' % b for b in chunk)
        print('read {} {}: {}'.format(size, plural('byte', len(chunk)), hexrepr))
        yield chunk
        i += size

bytes_obj = b'\xc3\x84\xc3\x96\xc3\x9c\xc3\x84\xc3\x96\xc3\x9c'  # 'ÄÖÜÄÖÜ'

for decoded in codecs.iterdecode(reader(bytes_obj), 'utf-8'):
    print(decoded)
Sample output:
read 3 bytes: 0xC384C3
Ä
read 1 byte: 0x96
Ö
read 1 byte: 0xC3
read 3 bytes: 0x9CC384
ÜÄ
read 2 bytes: 0xC396
Ö
read 4 bytes: 0xC39C
Ü
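If you want the (decoded text, leftover bytes) pair directly, as the question asks, rather than a stream, here is a minimal sketch using codecs' incremental decoder machinery directly (the helper name decode_longest is ours):

```python
import codecs

def decode_longest(data, encoding='utf-8'):
    """Decode as much of data as possible; return (text, remainder)."""
    decoder = codecs.getincrementaldecoder(encoding)()
    text = decoder.decode(data)        # final=False: an incomplete tail is buffered
    remainder = decoder.getstate()[0]  # bytes the decoder is still holding
    return text, remainder

text, rest = decode_longest(b'\xc3\x84\xc3\x96\xc3')
# 'ÄÖ' is decoded; the dangling b'\xc3' is returned for the next read
```

The remainder can then be prepended to the next chunk read from the stream.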

why is sys.stdin.read(4).encode('utf-8') returning more than 4 bytes?

I am passing a JSON object from Chrome to my Python app's stdin via the Chrome/JavaScript sendNativeMessage function.
Sometimes the code below works. Other times (I believe on larger messages), it does not. I'm not sure what I'm doing wrong, but sometimes sys.stdin.read(4).encode('utf-8') seems to yield 7 bytes instead of the specified 4, and that's when it breaks with a "struct.error: unpack requires a bytes object of length 4" message.
Can someone let me know what I'm doing wrong here?
# On Windows, the default I/O mode is O_TEXT. Set this to O_BINARY
# to avoid unwanted modifications of the input/output streams.
import sys, os, json, struct, logging, msvcrt
msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
# Read the message length (first 4 bytes).
#for line in sys.stdin:
text_length_bytes = sys.stdin.read(4).encode('utf-8')
logging.info( text_length_bytes )
# Unpack message length as 4 byte integer.
text_length = struct.unpack('i', text_length_bytes)[0]
logging.info( text_length )
# Read the text of the message.
text = json.loads( sys.stdin.read(text_length) )
One Unicode character may consist of more than one byte:
In [4]: len('ü'.encode('utf-8'))
Out[4]: 2
As you want to unpack those 4 bytes as an integer, you probably want to read them as bytes (instead of str) from stdin in the first place, via sys.stdin.buffer (aoeu below is the input typed at the prompt):
In [8]: type(sys.stdin.read(4))
aoeu
Out[8]: str
In [9]: type(sys.stdin.buffer.read(4))
aoeu
Out[9]: bytes
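The whole native-messaging framing can then be sketched against a binary stream; io.BytesIO stands in for sys.stdin.buffer here, and the '<I' little-endian length prefix is an assumption about the framing:

```python
import io
import json
import struct

# Simulated framed message: 4-byte little-endian length, then a JSON payload.
payload = json.dumps({"msg": "hi"}).encode('utf-8')
stream = io.BytesIO(struct.pack('<I', len(payload)) + payload)

raw_length = stream.read(4)                  # bytes, never str
length = struct.unpack('<I', raw_length)[0]  # 4-byte unsigned int
message = json.loads(stream.read(length).decode('utf-8'))
```

No .encode() call is needed anywhere on the length prefix, which is what went wrong in the question's code.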

How to change the bytes in a file?

I'm making an encryption program and I need to open a file in binary mode to access non-ASCII and non-printable characters. I need to check whether each character from the file is a letter, number, symbol, or unprintable character. That means I have to check, one by one, whether the bytes (when decoded to ASCII) match any of these characters:
{^9,dzEV=Q4ciT+/s};fnq3BFh% #2!k7>YSU<GyD\I]|OC_e.W0M~ua-jR5lv1wA`#8t*xr'K"[P)&b:g$p(mX6Ho?JNZL
I think I could encode these characters above to binary and then compare them with bytes. I don't know how to do this.
P.S. Sorry for bad English and binary misunderstanding. (I hope you know what I mean by bytes; I mean characters in binary mode like this):
\x01\x00\x9a\x9c\x18\x00
There are two major string types in Python: bytestrings (a sequence of bytes) that represent binary data and Unicode strings (a sequence of Unicode codepoints) that represent human-readable text. It is simple to convert one into the other:
unicode_text = bytestring.decode(character_encoding)
bytestring = unicode_text.encode(character_encoding)
If you open a file in binary mode e.g., 'rb' then file.read() returns a bytestring (bytes type):
>>> b'A' == b'\x41' == chr(0b1000001).encode()
True
There are several methods that can be used to classify bytes:
string methods such as bytes.isdigit():
>>> b'1'.isdigit()
True
string constants such as string.printable
>>> import string
>>> b'!' in string.printable.encode()
True
regular expressions such as \d
>>> import re
>>> bool(re.match(br'\d+$', b'123'))
True
classification functions in curses.ascii module e.g., curses.ascii.isprint()
>>> from curses import ascii
>>> bytearray(filter(ascii.isprint, b'123'))
bytearray(b'123')
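Putting those pieces together, here is a sketch of the classification the question asks for, using only the string constants; the category names are ours:

```python
import string

def classify(byte_val):
    """Label one byte (an int 0-255); category names are illustrative."""
    ch = chr(byte_val)
    if ch in string.ascii_letters:
        return 'letter'
    if ch in string.digits:
        return 'number'
    if ch in string.printable:
        return 'symbol'        # printable, but neither letter nor digit
    return 'unprintable'

labels = [classify(b) for b in b'\x01A9!']
# one label per byte: unprintable, letter, number, symbol
```

Iterating over a bytes object yields ints, so classify() takes an int and converts it with chr() before testing membership.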
bytearray is a mutable sequence of bytes — unlike a bytestring you can change it inplace e.g., to lowercase every 3rd byte that is uppercase:
>>> import string
>>> a = bytearray(b'ABCDEF_')
>>> uppercase = string.ascii_uppercase.encode()
>>> a[::3] = [b | 0b0100000 if b in uppercase else b
... for b in a[::3]]
>>> a
bytearray(b'aBCdEF_')
Notice: b'ad' are lowercase but b'_' remained the same.
To modify a binary file inplace, you could use mmap module e.g., to lowercase 4th column in every other line in 'file':
#!/usr/bin/env python3
import mmap
import string

uppercase = string.ascii_uppercase.encode()
ncolumn = 3  # select 4th column
with open('file', 'r+b') as file, \
     mmap.mmap(file.fileno(), 0, access=mmap.ACCESS_WRITE) as mm:
    while True:
        mm.readline()           # ignore every other line
        pos = mm.tell()         # remember current position
        if not mm.readline():   # EOF
            break
        if mm[pos + ncolumn] in uppercase:
            mm[pos + ncolumn] |= 0b0100000  # lowercase
Note: Python 2 and 3 APIs differ in this case. The code uses Python 3.
Input
ABCDE1
FGHIJ
ABCDE
FGHI
Output
ABCDE1
FGHiJ
ABCDE
FGHi
Notice: the 4th column became lowercase on the 2nd and 4th lines.
Typically, if you want to change a file you read from it, write modifications to a temporary file, and on success move the temporary file in place of the original file:
#!/usr/bin/env python3
import os
import string
from tempfile import NamedTemporaryFile

caesar_shift = 3
filename = 'file'

def caesar_bytes(plaintext, shift, alphabet=string.ascii_lowercase.encode()):
    shifted_alphabet = alphabet[shift:] + alphabet[:shift]
    return plaintext.translate(plaintext.maketrans(alphabet, shifted_alphabet))

dest_dir = os.path.dirname(filename)
chunksize = 1 << 15
with open(filename, 'rb') as file, \
     NamedTemporaryFile('wb', dir=dest_dir, delete=False) as tmp_file:
    while True:  # encrypt
        chunk = file.read(chunksize)
        if not chunk:  # EOF
            break
        tmp_file.write(caesar_bytes(chunk, caesar_shift))
os.replace(tmp_file.name, filename)
Input
abc
def
ABC
DEF
Output
def
ghi
ABC
DEF
To convert the output back, set caesar_shift = -3.
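A quick round trip with the caesar_bytes() helper from the code above shows both directions and confirms that bytes outside the alphabet pass through unchanged:

```python
import string

def caesar_bytes(plaintext, shift, alphabet=string.ascii_lowercase.encode()):
    shifted_alphabet = alphabet[shift:] + alphabet[:shift]
    return plaintext.translate(plaintext.maketrans(alphabet, shifted_alphabet))

enc = caesar_bytes(b'abc\ndef\nABC\n', 3)
# lowercase bytes shift by 3; uppercase and newlines are untouched
dec = caesar_bytes(enc, -3)
# shifting back by -3 restores the original bytes
```

bytes.translate() with a 256-byte table built by maketrans() is why this works on raw bytes without any decoding.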
To open a file in binary mode you use open("filena.me", "rb"). I've never used it personally, but that should get you the information you need.

Python Read Certain Number of Bytes After Character

I'm dealing with a character separated hex file, where each field has a particular start code. I've opened the file as 'rb', but I was wondering, after I get the index of the startcode using .find, how do I read a certain number of bytes from this position?
This is how I am loading the file and what I am attempting to do
with open(someFile, 'rb') as fileData:
    startIndex = fileData.find('(G')
    data = fileData[startIndex:7]
where 7 is the number of bytes I want to read from the index returned by the find function. I am using python 2.7.3
You can get the position of a substring in a bytestring under python2.7 like this:
>>> with open('student.txt', 'rb') as f:
... data = f.read()
...
>>> data # holds the French word for student: élève
'\xc3\xa9l\xc3\xa8ve\n'
>>> len(data) # this shows we are dealing with bytes here, because "élève\n" would be 6 characters long, had it been properly decoded!
8
>>> len(data.decode('utf-8'))
6
>>> data.find('\xa8') # continue with the bytestring...
4
>>> bytes_to_read = 3
>>> data[4:4+bytes_to_read]
'\xa8ve'
You can look for the special characters, and for compatibility with Python 3 it's better to prefix the literal with b, indicating these are bytes (in Python 2.x it works without, too):
>>> data.find(b'è') # in python2.x this works too (unfortunately, because it has lead to a lot of confusion): data.find('è')
3
>>> bytes_to_read = 3
>>> pos = data.find(b'è')
>>> data[pos:pos+bytes_to_read] # when you use the syntax 'n:m', it will read bytes in a bytestring
'\xc3\xa8v'
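Applied to the original question, the same find-and-slice pattern works once the file contents are read into a bytestring; the sample data and the '(G' start code below are illustrative:

```python
# Hypothetical record containing the '(G' start code from the question.
data = b'\x00\x01(G\xaa\xbb\xcc\xdd\xee\xff\x10'

start = data.find(b'(G')         # index of the start code in the bytestring
field = data[start:start + 7]    # 7 bytes beginning at the start code
```

The key fix versus the question's code: call read() on the file object first, then find/slice the resulting bytestring (a file object has no find() method, and slicing needs start:start+n, not start:n).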

bz2 decompress with Python 3.4 - TypeError: 'str' does not support the buffer interface

There are similar errors but I could not find a solution for bz2.
The following program fails on the decompress:
import bz2
un = 'BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084'
pw = 'BZh91AY&SY\x94$|\x0e\x00\x00\x00\x81\x00\x03$ \x00!\x9ah3M\x13<]\xc9\x14\xe1BBP\x91\xf08'
decoded_un = bz2.decompress(un)
decoded_pw = bz2.decompress(pw)
print(decoded_un)
print(decoded_pw)
I tried using bytes(un, 'UTF-8') but that would not work. I think I did not have this problem in Python 3.3.
EDIT: this was for the Python Challenge. I have two bits of code which work, thanks to Martijn:
import bz2
un_saved = 'BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084'
pw_saved = 'BZh91AY&SY\x94$|\x0e\x00\x00\x00\x81\x00\x03$ \x00!\x9ah3M\x13<]\xc9\x14\xe1BBP\x91\xf08'
print(bz2.decompress(un_saved.encode('latin1')))
print(bz2.decompress(pw_saved.encode('latin1')))
This one works from the webpage:
# http://www.pythonchallenge.com/pc/def/integrity.html
import urllib.request
import re
import os.path
import bz2

fname = "008.html"
if not os.path.isfile(fname):
    url = 'http://www.pythonchallenge.com/pc/def/integrity.html'
    response = urllib.request.urlopen(url)
    webpage = response.read().decode("utf-8")
    with open(fname, "w") as fh:
        fh.write(webpage)
with open(fname, "r") as fh:
    webpage = fh.read()

re_un = '\\nun: \'(.*)\'\\n'
m = re.search(re_un, webpage)
un = m.group(1)
print(un)

pw_un = '\\npw: \'(.*)\'\\n'
m = re.search(pw_un, webpage)
pw = m.group(1)
print(pw)

unde = un.encode('latin-1').decode('unicode_escape').encode('latin1')
pwde = pw.encode('latin-1').decode('unicode_escape').encode('latin1')
decoded_un = bz2.decompress(unde)
decoded_pw = bz2.decompress(pwde)
print(decoded_un)
print(decoded_pw)
The bz2 library deals with bytes objects, not strings:
un = b'BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084'
pw = b'BZh91AY&SY\x94$|\x0e\x00\x00\x00\x81\x00\x03$ \x00!\x9ah3M\x13<]\xc9\x14\xe1BBP\x91\xf08'
In other words, using bytes() works just fine; just make sure you use the correct encoding. UTF-8 is not that encoding. If you have bytes masquerading as string codepoints, encode with Latin-1 instead; Latin-1 maps code points one-to-one to bytes:
un = un.encode('latin1')
or
un = bytes(un, 'latin1')
Also see the Python Unicode HOWTO:
Latin-1, also known as ISO-8859-1, is a similar encoding. Unicode code points 0–255 are identical to the Latin-1 values, so converting to this encoding simply requires converting code points to byte values; if a code point larger than 255 is encountered, the string can’t be encoded into Latin-1.
I'll leave the decoding to you. Have fun with the Python Challenge!
Note that if you loaded these characters as-is from a webpage, they will not be ready-made bytes! You'll have the characters '\', 'x', '8' and '2' rather than a codepoint with hex value 82. You'd need to interpret those sequences as a Python string literal first:
>>> un = r'BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084'
>>> un
'BZh91AY&SYA\\xaf\\x82\\r\\x00\\x00\\x01\\x01\\x80\\x02\\xc0\\x02\\x00 \\x00!\\x9ah3M\\x07<]\\xc9\\x14\\xe1BA\\x06\\xbe\\x084'
>>> un.encode('latin-1').decode('unicode_escape')
'BZh91AY&SYA¯\x82\r\x00\x00\x01\x01\x80\x02À\x02\x00 \x00!\x9ah3M\x07<]É\x14áBA\x06¾\x084'
>>> un.encode('latin-1').decode('unicode_escape').encode('latin1')
b'BZh91AY&SYA\xaf\x82\r\x00\x00\x01\x01\x80\x02\xc0\x02\x00 \x00!\x9ah3M\x07<]\xc9\x14\xe1BA\x06\xbe\x084'
Note the double backslashes in the representation of un. Only the final bytes result is then decompressible!
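The bytes-only contract of the bz2 module is easy to verify with a small round trip in Python 3:

```python
import bz2

payload = bz2.compress(b'hello')
assert bz2.decompress(payload) == b'hello'   # bytes in, bytes out

# Passing a str (here made by decoding with Latin-1) is rejected outright.
try:
    bz2.decompress(payload.decode('latin1'))
except TypeError:
    pass  # Python 3 refuses str where a bytes-like object is required
```

This is exactly the error the question hit: the compressed data was held in a str, so it had to be turned back into the original bytes (via Latin-1) before decompressing.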
