Extract first 20 bytes in a binary header - python

I'm trying to learn how to do this in Python. I played around with the pseudo code below but couldn't come up with anything worth a penny:
with open(file, "rb") as f:
    byte = f.read(20) # read the first 20 bytes?
    while byte != "":
        print f.read(1)
In the end I'd like to end up with a code capable of the following: https://stackoverflow.com/a/2538034/2080223
But I'm of course interested in learning how to get there, so any pointers would be much appreciated!

Very close
with open(file, "rb") as f:
    byte = f.read(20) # read the first 20 bytes? *Yes*
will indeed read the first 20 bytes.
But
while byte != "":
    print f.read(1) # print a single byte?
will (as you expect) read a single byte and print it, but it will print it forever, since your loop condition will always be true.
It's not clear what you want to do here, but if you just want to print a single byte, removing the while loop will do that:
print f.read(1)
If you want to print single bytes until the end of file, consider:
while True:
    byte = f.read(1)
    if byte == "": break
    print byte
Alternatively, if you're looking for specific bytes within the first 20 you read into byte, you can use iterable indexing:
with open(file, "rb") as f:
    byte = f.read(20)
    print byte[0] # First byte of the 20 bytes / first byte of the file
    print byte[1] # Second byte of the 20 bytes / ...
    # ...
Or, as Lucas suggests in the comments, you could iterate over the string byte (it's a string, by the way, that is what read() returns):
with open(file, "rb") as f:
    byte = f.read(20)
    for b in byte:
        print b
You may also be interested in the position of the byte and its hexadecimal value (for values like 0x0a, 0x0d, etc.):
with open(file, "rb") as f:
    byte = f.read(20)
    for i, b in enumerate(byte):
        print "%02d: %02x" % (i, ord(b))  # ord() needed: iterating a str yields 1-char strings
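For reference, in Python 3 the same dump is a little simpler: read() on a file opened in "rb" mode returns a bytes object, and iterating over bytes yields integers, so no ord() is needed. A minimal sketch (the filename is a placeholder, not from the question):

with open("header.bin", "rb") as f:       # "header.bin" is a placeholder path
    header = f.read(20)                   # bytes object of up to 20 bytes
    for i, b in enumerate(header):
        print("%02d: %02x" % (i, b))      # b is already an int in Python 3

If the header has a known fixed layout, struct.unpack can also slice it into fields in one call, e.g. struct.unpack('<5i', header) for five little-endian 32-bit integers (a purely illustrative layout, not one from the question).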

Related

Problem reading valid last line of a file [duplicate]

I have a text file which contains a time stamp on each line. My goal is to find the time range. All the times are in order so the first line will be the earliest time and the last line will be the latest time. I only need the very first and very last line. What would be the most efficient way to get these lines in python?
Note: These files are relatively large in length, about 1-2 million lines each and I have to do this for several hundred files.
To read both the first and final line of a file you could...
open the file, ...
... read the first line using built-in readline(), ...
... seek (move the cursor) to the end of the file, ...
... step backwards until you encounter EOL (line break) and ...
... read the last line from there.
def readlastline(f):
    f.seek(-2, 2)              # Jump to the second last byte.
    while f.read(1) != b"\n":  # Until EOL is found ...
        f.seek(-2, 1)          # ... jump back, over the read byte plus one more.
    return f.read()            # Read all data from this point on.

with open(file, "rb") as f:
    first = f.readline()
    last = readlastline(f)
Jump to the second last byte directly to prevent trailing newline characters from causing empty lines to be returned*.
The current offset is pushed ahead by one every time a byte is read, so the stepping backwards is done two bytes at a time: past the recently read byte and on to the byte to read next.
The whence parameter passed to f.seek(offset, whence=0) indicates that seek should move to a position offset bytes relative to...
0 or os.SEEK_SET = The beginning of the file.
1 or os.SEEK_CUR = The current position.
2 or os.SEEK_END = The end of the file.
* This is the expected behavior, since most applications, including print and echo, append a newline to every line written; it has no effect on files whose last line is missing the trailing newline character.
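As a quick illustration of the three whence values (a minimal sketch, not from the original answer; the path is a placeholder, and the file must be opened in binary mode since non-zero relative and end-based seeks are not allowed in text mode):

import os

with open("sample.bin", "rb") as f:
    f.seek(10, os.SEEK_SET)   # 10 bytes from the beginning of the file
    f.seek(5, os.SEEK_CUR)    # 5 more bytes forward from the current position
    f.seek(-1, os.SEEK_END)   # 1 byte before the end of the file
    print(f.tell())           # absolute offset after the last seek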
Efficiency
1-2 million lines each and I have to do this for several hundred files.
I timed this method and compared it against the top answer.
10k iterations processing a file of 6k lines totalling 200kB: 1.62s vs 6.92s.
100 iterations processing a file of 6k lines totalling 1.3GB: 8.93s vs 86.95s.
Millions of lines would increase the difference a lot more.
Exact code used for timing:
with open(file, "rb") as f:
    first = f.readline()  # Read and store the first line.
    for last in f: pass   # Read all lines, keep final value.
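If you want to reproduce such a comparison yourself, a rough timeit harness along these lines could be used (a sketch only: the path and iteration count are placeholders, the helper names are made up, and readlastline refers to the function defined earlier in this answer):

import timeit

def via_seek(path):
    with open(path, "rb") as f:
        first = f.readline()
        last = readlastline(f)    # seek-from-the-end method above
    return first, last

def via_loop(path):
    with open(path, "rb") as f:
        first = f.readline()
        last = first
        for line in f:            # read every line, keep the final one
            last = line
    return first, last

print(timeit.timeit(lambda: via_seek("big.txt"), number=100))
print(timeit.timeit(lambda: via_loop("big.txt"), number=100))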
Amendment
A more complex, and harder to read, variation to address comments and issues raised since.
Return empty string when parsing empty file, raised by comment.
Return all content when no delimiter is found, raised by comment.
Avoid relative offsets to support text mode, raised by comment.
UTF16/UTF32 hack, noted by comment.
Also adds support for multibyte delimiters, readlast(b'X<br>Y', b'<br>', fixed=False).
Please note that this variation is really slow for large files because of the non-relative offsets needed in text mode. Modify it to your needs, or do not use it at all, as you're probably better off using f.readlines()[-1] with files opened in text mode.
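That simpler text-mode fallback would look roughly like this (a minimal sketch; it reads the whole file into memory, and the path is a placeholder):

with open("some.txt") as f:           # text mode; fine for reasonably sized files
    lines = f.readlines()
    first = lines[0] if lines else ""
    last  = lines[-1] if lines else ""

The full variation follows: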
#!/bin/python3
from os import SEEK_END

def readlast(f, sep, fixed=True):
    r"""Read the last segment from a file-like object.

    :param f: File to read last line from.
    :type  f: file-like object
    :param sep: Segment separator (delimiter).
    :type  sep: bytes, str
    :param fixed: Treat data in ``f`` as a chain of fixed size blocks.
    :type  fixed: bool
    :returns: Last line of file.
    :rtype: bytes, str
    """
    bs   = len(sep)
    step = bs if fixed else 1
    if not bs:
        raise ValueError("Zero-length separator.")
    try:
        o = f.seek(0, SEEK_END)
        o = f.seek(o-bs-step)     # - Ignore trailing delimiter 'sep'.
        while f.read(bs) != sep:  # - Until reaching 'sep': Read sep-sized block
            o = f.seek(o-step)    #   and then seek to the block to read next.
    except (OSError, ValueError): # - Beginning of file reached.
        f.seek(0)
    return f.read()

def test_readlast():
    from io import BytesIO, StringIO
    # Text mode.
    f = StringIO("first\nlast\n")
    assert readlast(f, "\n") == "last\n"
    # Bytes.
    f = BytesIO(b'first|last')
    assert readlast(f, b'|') == b'last'
    # Bytes, UTF-8.
    f = BytesIO("X\nY\n".encode("utf-8"))
    assert readlast(f, b'\n').decode() == "Y\n"
    # Bytes, UTF-16.
    f = BytesIO("X\nY\n".encode("utf-16"))
    assert readlast(f, b'\n\x00').decode('utf-16') == "Y\n"
    # Bytes, UTF-32.
    f = BytesIO("X\nY\n".encode("utf-32"))
    assert readlast(f, b'\n\x00\x00\x00').decode('utf-32') == "Y\n"
    # Multichar delimiter.
    f = StringIO("X<br>Y")
    assert readlast(f, "<br>", fixed=False) == "Y"
    # Make sure you use the correct delimiters.
    seps = { 'utf8': b'\n', 'utf16': b'\n\x00', 'utf32': b'\n\x00\x00\x00' }
    assert "\n".encode('utf8' )     == seps['utf8']
    assert "\n".encode('utf16')[2:] == seps['utf16']
    assert "\n".encode('utf32')[4:] == seps['utf32']
    # Edge cases.
    edges = (
        # Text , Match
        (""    , ""  ),  # Empty file, empty string.
        ("X"   , "X" ),  # No delimiter, full content.
        ("\n"  , "\n"),
        ("\n\n", "\n"),
        # UTF16/32 encoded U+270A (b"\n\x00\n'\n\x00"/utf16)
        (b'\n\xe2\x9c\x8a\n'.decode(), b'\xe2\x9c\x8a\n'.decode()),
    )
    for txt, match in edges:
        for enc, sep in seps.items():
            assert readlast(BytesIO(txt.encode(enc)), sep).decode(enc) == match

if __name__ == "__main__":
    import sys
    for path in sys.argv[1:]:
        with open(path) as f:
            print(f.readline()    , end="")
            print(readlast(f,"\n"), end="")
See the docs for the io module.
with open(fname, 'rb') as fh:
    first = next(fh).decode()
    fh.seek(-1024, 2)
    last = fh.readlines()[-1].decode()
The variable value here is 1024: it represents the average string length. I chose 1024 only as an example. If you have an estimate of the average line length you could just use that value times 2.
Since you have no idea whatsoever about the possible upper bound for the line length, the obvious solution would be to loop over the file:
for line in fh:
    pass
last = line
You don't need to bother with the binary flag here; you could just use open(fname).
ETA: Since you have many files to work on, you could create a sample of a couple of dozen files using random.sample and run this code on them to determine the length of the last line, with an a priori large value for the position shift (let's say 1 MB). This will help you to estimate the value for the full run.
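A rough sketch of that sampling idea (the glob pattern, sample size and 1 MB shift are all placeholder values, and it assumes the sampled files are larger than the shift):

import glob
import random

paths = glob.glob("data/*.log")                  # placeholder pattern for your files
for path in random.sample(paths, min(24, len(paths))):
    with open(path, "rb") as fh:
        fh.seek(-1024 * 1024, 2)                 # a priori large shift: 1 MB from the end
        last = fh.readlines()[-1]
        print(path, len(last))                   # use these lengths to pick the real shift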
Here's a modified version of SilentGhost's answer that will do what you want.
with open(fname, 'rb') as fh:
    first = next(fh)
    offs = -100
    while True:
        fh.seek(offs, 2)
        lines = fh.readlines()
        if len(lines) > 1:
            last = lines[-1]
            break
        offs *= 2
print first
print last
No need for an upper bound for line length here.
Can you use unix commands? I think head -1 and tail -n 1 are probably the most efficient methods. Alternatively, you could use a simple fid.readline() to get the first line and fid.readlines()[-1] to get the last, but that may take too much memory.
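If shelling out is acceptable, a minimal sketch using subprocess (it assumes the head and tail utilities are available on the system; the function name is just illustrative):

import subprocess

def first_and_last(path):
    first = subprocess.check_output(["head", "-1", path])
    last  = subprocess.check_output(["tail", "-n", "1", path])
    return first, last                           # both returned as bytes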
This is my solution, also compatible with Python 3. It also handles corner cases, but it lacks UTF-16 support:
import os

def tail(filepath):
    """
    #author Marco Sulla (marcosullaroma#gmail.com)
    #date May 31, 2016
    """
    try:
        filepath.is_file
        fp = str(filepath)
    except AttributeError:
        fp = filepath

    with open(fp, "rb") as f:
        size = os.stat(fp).st_size
        start_pos = 0 if size - 1 < 0 else size - 1

        if start_pos != 0:
            f.seek(start_pos)
            char = f.read(1)

            if char == b"\n":
                start_pos -= 1
                f.seek(start_pos)

            if start_pos == 0:
                f.seek(start_pos)
            else:
                char = ""

                for pos in range(start_pos, -1, -1):
                    f.seek(pos)
                    char = f.read(1)

                    if char == b"\n":
                        break

        return f.readline()
It's inspired by Trasp's answer and AnotherParker's comment.
First open the file in read mode. Then use the readlines() method to read line by line. All the lines are stored in a list. Now you can use list slices to get the first and last lines of the file.
a = open('file.txt', 'rb')
lines = a.readlines()
if lines:
    first_line = lines[0]
    last_line = lines[-1]
w = open('file.txt', 'r')
print ('first line is : ', w.readline())
for line in w:
    x = line
print ('last line is : ', x)
w.close()
The for loop runs through the lines and x gets the last line on the final iteration.
with open("myfile.txt") as f:
    lines = f.readlines()
    first_row = lines[0]
    print first_row
    last_row = lines[-1]
    print last_row
Here is an extension of @Trasp's answer that has additional logic for handling the corner case of a file that has only one line. It may be useful to handle this case if you repeatedly want to read the last line of a file that is continuously being updated. Without this, if you try to grab the last line of a file that has just been created and has only one line, IOError: [Errno 22] Invalid argument will be raised.
def tail(filepath):
    with open(filepath, "rb") as f:
        first = f.readline()      # Read the first line.
        f.seek(-2, 2)             # Jump to the second last byte.
        while f.read(1) != b"\n": # Until EOL is found...
            try:
                f.seek(-2, 1)     # ...jump back the read byte plus one more.
            except IOError:
                f.seek(-1, 1)
                if f.tell() == 0:
                    break
        last = f.readline()       # Read last line.
        return last
Nobody mentioned using reversed:
f=open(file,"r")
r=reversed(f.readlines())
last_line_of_file = r.next()
Getting the first line is trivially easy. For the last line, presuming you know an approximate upper bound on the line length, seek that amount back from SEEK_END (os.lseek), find the second-to-last line ending, and then readline() the last line.
with open(filename, "rb") as f:  # Needs to be in binary mode for the seek from the end to work
    first = f.readline()
    if f.read(1) == b'':
        return first
    f.seek(-2, 2)                # Jump to the second last byte.
    while f.read(1) != b"\n":    # Until EOL is found...
        f.seek(-2, 1)            # ...jump back the read byte plus one more.
    last = f.readline()          # Read last line.
    return last
The above is a modified version of the answers above which handles the case where there is only one line in the file.

Read blocks from a file object until x bytes from the end

I need to read chunks of 64KB in a loop and process them, but stop at the end of the file minus 16 bytes: the last 16 bytes are tag metadata.
The file might be super large, so I can't read it all in RAM.
All the solutions I find are a bit clumsy and/or unpythonic.
with open('myfile', 'rb') as f:
    while True:
        block = f.read(65536)
        if not block:
            break
        process_block(block)
If 16 <= len(block) < 65536, it's easy: it's the last block ever. So useful_data = block[:-16] and tag = block[-16:]
If len(block) == 65536, it could mean three things: that the full block is useful data. Or that this 64KB block is in fact the last block, so useful_data = block[:-16] and tag = block[-16:]. Or that this 64KB block is followed by another block of only a few bytes (let's say 3 bytes), so in this case: useful_data = block[:-13] and tag = block[-13:] + last_block[:3].
How to deal with this problem in a nicer way than distinguishing all these cases?
Note:
the solution should work for a file opened with open(...), but also for a io.BytesIO() object, or for a distant SFTP opened file (with pysftp).
I was thinking about getting the file object size, with
f.seek(0,2)
length = f.tell()
f.seek(0)
Then after each
block = f.read(65536)
we can know if we are far from the end with length - f.tell(), but again the full solution does not look very elegant.
You can just read min(65536, L - f.tell() - 16) bytes in every iteration.
Something like this:
from pathlib import Path

L = Path('myfile').stat().st_size

with open('myfile', 'rb') as f:
    while True:
        to_read_length = min(65536, L - f.tell() - 16)
        block = f.read(to_read_length)
        process_block(block)
        if f.tell() == L - 16:
            break
I did not run this, but I hope you get the gist of it.
The following method relies only on the fact that the f.read() method returns an empty bytes object upon end of stream (EOS). It could thus be adapted for sockets simply by replacing f.read() with s.recv().
import random

def read_all_but_last16(f):
    rand = random.Random()  # just for testing
    buf = b''
    while True:
        bytes_read = f.read(rand.randint(1, 40))  # just for testing
        # bytes_read = f.read(65536)
        buf += bytes_read
        if not bytes_read:
            break
        process_block(buf[:-16])
        buf = buf[-16:]
    verify(buf[-16:])
It works by always leaving 16 bytes at the end of buf until EOS, then finally processing the last 16. Note that if there aren't at least 17 bytes in buf then buf[:-16] returns the empty bytes object.
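A quick way to sanity-check this on an in-memory stream (a sketch; process_block and verify here stand in for whatever handling and tag verification you actually need):

import io

def process_block(data):
    print("payload chunk of", len(data), "bytes")

def verify(tag):
    assert len(tag) == 16                        # the 16-byte trailing tag
    print("tag:", tag)

stream = io.BytesIO(b"A" * 100 + b"T" * 16)      # 100 payload bytes + 16-byte tag
read_all_but_last16(stream)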

Fetching address and values of opened file

I need to read a particular byte from a big binary file using Python. Using f.seek() takes a long time. Is there any method to fetch the address of the first byte of file and then add the address to reach to a particular byte in Python?
For example, given a text file containing
asddfrgd
get address of a, add 5, and then fetch the resulting value (which is 'r', assuming 1 byte for each letter).
Your description is not very clear. I assume that you want to fetch all values that are 5 bytes after an "a" in your example, such that "aardvark" gets "a" and "r" and the last "a" is skipped, because adding 5 goes beyond the end of the string.
Here's a solution that returns a list of such values by scanning the file linearly without jumping, byte by byte:
def find_past(fn, which, step):
    """ Read file 'fn' and return all elements 'step' bytes after
        each occurrence of 'which'.
    """
    f = open(fn, "rb")
    n = 0        # current byte address
    res = []     # list of result bytes
    next = []    # list of next byte addresses to consider
    while True:
        c = f.read(1)
        if c == "":
            break
        if next and next[0] == n:
            res.append(c)
            next.pop(0)
        if c == which:
            next.append(n + step)
        n += 1
    f.close()
    return res
Keeping track of the lists and byte offsets should be cheaper than f.seek(), but I haven't tried that on large data.

Failed to decode bytes - IronPython

I have some files with unicode data, the following code works fine when working with CPython to read those files, whereas the code crashes on IronPython saying "failed to decode bytes at index 67"
for f in self.list_of_files:
    all_words_in_file = []
    with codecs.open(f, encoding="utf-8-sig") as file_obj:
        for line in file_obj:
            all_words_in_file.extend(line.split(" "))
    #print "Normalising unicode strings"
    normal_list = []
    #gets all the words and remove duplicate words
    #the list will contain unique normalized words
    for l in all_words_in_file:
        normal_list.append(normalize('NFKC', l))
    file_listing.update({f: normal_list})
return file_listing
I cannot understand the reason. Is there another way to read unicode data in IronPython?
How about this one:
def lines(filename):
    f = open(filename, "rb")
    yield f.readline()[3:].strip().decode("utf-8")
    for line in f:
        yield line.strip().decode("utf-8")
    f.close()

for line in lines("text-utf8-with-bom.txt"):
    all_words_in_file.extend(line.split(" "))
I have also filed an IronPython bug: https://ironpython.codeplex.com/workitem/34951
As long as you are feeding entire lines to decode, things will be ok.
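If you ever have to decode arbitrary byte chunks rather than whole lines, an incremental decoder avoids errors from multibyte sequences split across chunk boundaries. A minimal sketch (the filename follows the example above; the utf-8-sig codec also strips the BOM):

import codecs

decoder = codecs.getincrementaldecoder("utf-8-sig")()
text = ""
with open("text-utf8-with-bom.txt", "rb") as f:
    while True:
        chunk = f.read(4096)
        if not chunk:
            text += decoder.decode(b"", final=True)   # flush any buffered bytes
            break
        text += decoder.decode(chunk)                 # safe even if a character is split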

Python write to file

I've got a little problem here.
I'm converting binary to ASCII, in order to compress data.
All seems to work fine, but when I convert '11011011' to ASCII and try to write it into the file, I keep getting this error:
UnicodeEncodeError: 'charmap' codec can't encode character '\xdb' in position 0: character maps to <undefined>
Here's my code:
byte = ""
handleR = open(self.getInput())
handleW = open(self.getOutput(), 'w')
file = handleR.readlines()

for line in file:
    for a in range(0, len(line)):
        chunk = result[ord(line[a])]
        for b in chunk:
            if (len(byte) < 8):
                byte += str(chunk[b])
            else:
                char = chr(eval('0b' + byte))
                print(byte, char)
                handleW.write(char)
                byte = ""

handleR.close()
handleW.close()
Any help appreciated,
Thank You
I think you want:
handleR = open(self.getInput(), 'rb')
handleW = open(self.getOutput(), 'wb')
That will ensure you're reading and writing byte streams. Also, you can parse binary strings without eval:
char = chr(int(byte, 2))
And of course, it would be faster to use bit manipulation. Instead of appending to a string, you can use << (left shift) and | (bitwise or).
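A sketch of that bit-manipulation idea, accumulating bits into an int with shifts instead of building up a string (the variable names and the example bit sequence are just illustrative):

acc = 0
nbits = 0
out = bytearray()
for bit in (1, 1, 0, 1, 1, 0, 1, 1):    # example bits for '11011011'
    acc = (acc << 1) | bit
    nbits += 1
    if nbits == 8:                      # a full byte has been collected
        out.append(acc)                 # 0xDB for this example
        acc = 0
        nbits = 0
# handleW.write(bytes(out)) would then write the packed bytes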
EDIT: For the actual writing, you can use:
handleW.write(bytes([char]))
This creates and writes a bytes object from a list consisting of a single number.
EDIT 2: Correction, it should be:
handleW.write(bytes([int(byte, 2)]))
There is no need to use chr.
