Read int values stored in a file in python

I am writing a program to encrypt a file using the RSA algorithm in Python, without using the Crypto library. I have generated the keys, and the e, n and d values are stored in .pem files. Now, in another script where the encryption takes place, I am using the e, d and n values, but every time I run the script I get this error:
File "rsaencrypt.py", line 91, in <module>
main()
File "rsaencrypt.py", line 62, in main
encrypt = pow(content, e, n)
TypeError: unsupported operand type(s) for pow(): 'bytes','_io.TextIOWrapper', '_io.TextIOWrapper'
Here's how I am opening the files in the encryption script and using pow() to encrypt the content:
n = open('nfile.pem', 'r')
c = open('cfile.pem', 'r')
d = open('dfile.pem', 'r')
encrypt = pow(content, e, n)
I have searched the internet for how to read an int value from a file, but I have found nothing.
Here's how I am saving the values in efile, dfile, and nfile:
#saving the values of n, d and e for further use
efile = open('efile.pem', 'w')
efile.write('%d' %(int(e)))
efile.close()
dfile = open('dfile.pem', 'w')
dfile.write('%d' %(int(d)))
dfile.close()
nfile = open('nfile.pem', 'w')
nfile.write('%d' % (int(n)))
nfile.close()
The values are stored like this: 564651648965132684135419864..............454
Now, since I want to encrypt the files, I need to read the integer values written in efile, dfile and nfile so I can pass them to pow() as arguments.
Looking forward to suggestions. Thank you.

The open() function returns a file object, not an int. You need to read the file's contents and convert them to an int, for example:
n = open('nfile.pem', 'r')
n_value = int(list(n)[0])
etc.
Another option (same result) is:
n = open('nfile.pem', 'r')
n_value = int(n.read())

The recommended way is to use with; this ensures your file is closed once you are done with it, rather than waiting for garbage collection or having to call f.close() explicitly.
n_results = []
with open('nfile.pem', 'r') as f:
    for line in f:
        try:
            n_results.append(int(line))
        except ValueError:
            n_results.append(0)  # replace 0 with any value that indicates a processing error
Also, use a try-except block in case your file contains noise that cannot be converted to an integer (int() raises ValueError in that case). n_results ends up holding all the values from your file, which you can aggregate or combine later into a single output.
This is a better foundation as your project scales and you deal with more data.
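Putting it together for your case, here is a minimal sketch. It assumes each .pem file holds a single decimal integer exactly as you wrote it, that the plaintext is read from a hypothetical file plain.txt, and that you want to treat the plaintext bytes as one big integer via int.from_bytes (that last part is an assumption about your design, not something from your post):
# Read the stored integers back and use them with pow().
with open('efile.pem', 'r') as f:
    e = int(f.read())
with open('nfile.pem', 'r') as f:
    n = int(f.read())

with open('plain.txt', 'rb') as f:   # hypothetical plaintext file
    content = f.read()

m = int.from_bytes(content, 'big')   # bytes -> int so pow() accepts it
encrypt = pow(m, e, n)               # m^e mod n
Note that for textbook RSA the integer m must be smaller than n, so large files would need to be split into blocks or padded.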

Related

How to read through binary (.dat) file and return greatest int value python

I am having trouble working with the integers I loop through and print out in the binary file.
I have a main program that creates a binary file, writes x amount of random integers to the file, then closes the file.
*Throughout these code snippets, I import dump and load from pickle
from pickle import dump
from random import randint

output_file = open('file.dat', 'wb')
# 10 random integers
for i in range(10):
    dump(randint(1, 100), output_file)
output_file.close()
I have created a program that will open this file, unpickle each integer and print them out. However, now I also want to work with these numbers: max, min, sum, etc. When I try to produce code that (I thought) would do this, I am getting:
33 Traceback (most recent call last):
File "binary_int_practice.py", line 13, in <module>
for i in load(input_file):
TypeError: 'int' object is not iterable
My code is below:
input_file = open('file.dat', 'rb')
print("Here are the integers:")
while True:
    try:
        i = load(input_file)
        print(i, end=' ')
        big = 0
        for i in load(input_file):
            if i > big:
                big = i
        print('The max number in the file is: ', big)
    except EOFError:
        input_file.close()
        break
Can someone explain or help me understand where I am going wrong?
Thanks
load returns the next value read from the file; in your case, each value read is an int (just as you wrote them). It does not return an iterable that you can loop over.
So you'll have to get each number with its own call to load.
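A minimal sketch of that idea: keep calling load() inside the try/except until EOFError, collect the values, and compute the statistics afterwards (variable names here are illustrative, not from the original post):
from pickle import load

numbers = []
with open('file.dat', 'rb') as input_file:
    while True:
        try:
            numbers.append(load(input_file))  # one int per load() call
        except EOFError:
            break                             # no more pickled objects

print('Here are the integers:', numbers)
print('max:', max(numbers), 'min:', min(numbers), 'sum:', sum(numbers))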
An alternative is to use a list: fill it first and then write the whole list to the file with a single dump call, since dumping inside the loop writes each randint value as a separate object in the file.
Here is code that works well:
from pickle import dump
from random import randint

output_file = open('file.dat', 'wb')
# 10 random integers
data = []
for i in range(10):
    data.append(randint(1, 100))
dump(data, output_file)
output_file.close()
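With the data stored as a single list, reading it back then takes just one load() call; a brief sketch under that assumption:
from pickle import load

with open('file.dat', 'rb') as input_file:
    data = load(input_file)  # the whole list in one call

print('The max number in the file is:', max(data))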

Writing array as string (python 3)

I am trying to write an entire array to a text or CSV file.
from array import array as pyarray
import csv

tmp1 = (x for x in range(10))
tmp2 = (x+10 for x in range(10))
arr1 = pyarray('l')
with open('fileoutput', 'wb') as fil1:
    for i in range(10):
        val = next(tmp1) - next(tmp2)
        arr1.append(val)
    arr1.tofile(fil1)
The problem with this code is that it writes a binary file. I want to write text instead, so that the output is readable. It is possible to write the file line by line in a loop, but the real problem has millions of values in arr1. What is an optimized way to write it in a human-readable form?
Edit:
After changing the open line to with open('fileoutput', 'w') as fil1: (i.e. 'wb' to 'w'), there is an error:
write() argument must be str, not bytes. So this does not solve the problem. Any suggestions?
You opened the file in wb mode. This writes in binary. Write to the file in w mode to write it as a string.
with open ('fileoutput','w') as fil1:
You can try appending the results to a string and then saving it to a file, as follows:
from array import array as pyarray

tmp1 = (x for x in range(10))
tmp2 = (x+10 for x in range(10))
arr1 = pyarray('l')
fileoutput_str = str(arr1) + '\n'
for i in range(10):
    val = next(tmp1) - next(tmp2)
    fileoutput_str += str(val) + '\n'
fileoutput_fn = 'fileoutput'
fileoutput_fo = open(fileoutput_fn, 'w')
fileoutput_fo.write(fileoutput_str)
fileoutput_fo.close()
You will have to remove the binary option b in order to write a string to the file.
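For the "millions of values" concern, building one huge string by repeated += concatenation gets slow. A sketch that joins the values and writes them in a single call may scale better; this is a suggestion added here, not part of the original answers:
from array import array as pyarray

arr1 = pyarray('l', range(10))  # example data

# One value per line, written in a single call; '\n'.join avoids
# rebuilding a growing string on every iteration.
with open('fileoutput', 'w') as fil1:
    fil1.write('\n'.join(map(str, arr1)))
    fil1.write('\n')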

Writing to a binary file python

I want to write something to a binary file using python.
I am simply doing:
import numpy as np
f = open('binary.file','wb')
i=4
j=5.55
f.write('i'+'j') #where do i specify that i is an integer and j is a double?
g = open('binary.file','rb')
first = np.fromfile(g,dtype=np.uint32,count = 1)
second = np.fromfile(g,dtype=np.float64,count = 1)
print first, second
The output is just:
[] []
I know it is very easy to do this in Matlab "fwrite(binary.file, i, 'int32');", but I want to do it in python.
You appear to be having some confusion about types in Python.
The expression 'i' + 'j' is adding two strings together. This results in the string ij, which is most likely written to the file as two bytes.
The variable i is already an int. You can write it to a file as a 4-byte integer in a couple of different ways (which also apply to the float j):
Use the struct module as detailed in how to write integer number in particular no of bytes in python ( file writing). Something like this:
import struct

with open('binary.file', 'wb') as f:
    f.write(struct.pack("i", i))
You would use the 'd' specifier to write j.
Use the numpy module to do the writing for you, which is especially convenient since you are already using it to read the file. The method ndarray.tofile is made just for this purpose:
i = 4
j = 5.55
with open('binary.file', 'wb') as f:
    np.array(i, dtype=np.uint32).tofile(f)
    np.array(j, dtype=np.float64).tofile(f)
Note that in both cases I use open as a context manager when writing the file with a with block. This ensures that the file is closed, even if an error occurs during writing.
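For completeness, here is a small round-trip sketch along the lines of the first option: write i and j with struct, then read them back with numpy as in the question. Packing the two values separately avoids any struct alignment padding between them; the print is shown in Python 3 form.
import struct
import numpy as np

i = 4
j = 5.55

with open('binary.file', 'wb') as f:
    f.write(struct.pack('i', i))  # 4-byte int
    f.write(struct.pack('d', j))  # 8-byte double

with open('binary.file', 'rb') as g:
    first = np.fromfile(g, dtype=np.uint32, count=1)
    second = np.fromfile(g, dtype=np.float64, count=1)

print(first, second)  # expected: [4] [5.55]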
That's because you are trying to write a string into a binary file. You also don't close the file before trying to read it again.
If you want to write ints or strings to a binary file, try the code below:
import numpy as np
import struct

f = open('binary.file', 'wb')
i = 4
if isinstance(i, int):
    f.write(struct.pack('i', i))  # write an int
elif isinstance(i, str):
    f.write(i)                    # write a string
else:
    raise TypeError('Can only write str or int')
f.close()

g = open('binary.file', 'rb')
first = np.fromfile(g, dtype=np.uint32, count=1)
second = np.fromfile(g, dtype=np.float64, count=1)
print first, second
I'll leave it to you to figure out the floating number.
print first, second
[4] []
The more Pythonic way, using context managers for the file handling:
import numpy as np
import struct

with open('binary.file', 'wb') as f:
    i = 4
    if isinstance(i, int):
        f.write(struct.pack('i', i))  # write an int
    elif isinstance(i, str):
        f.write(i)                    # write a string
    else:
        raise TypeError('Can only write str or int')

with open('binary.file', 'rb') as g:
    first = np.fromfile(g, dtype=np.uint32, count=1)
    second = np.fromfile(g, dtype=np.float64, count=1)

print first, second

Python file with buffer interface

I am reading a binary file containing a couple of dissimilar C-like structs, mostly like this:
import struct

f = open('binary_file.bin', 'rb')
header = struct.unpack("2h3i", f.read(struct.calcsize("2h3i")))
I don't like the repetition of the fmt-string "2h3i" and just stumbled over struct.unpack_from(). Unfortunately a file object seems not to have a buffer interface (I use Python 2.7).
Since the files are typically several GB in size, reading everything with data = f.read() and using data instead of f is not an option.
I found that using mmap might be the way to go, but unfortunately it seems that the 'read pointer' is not advanced when calling unpack_from on an mmap.
import mmap
import struct

f = open('binary_file.bin', 'rb')
mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
print mm.tell()                          # --> 0
header = struct.unpack_from("2h3i", mm)  # yeah!
print mm.tell()                          # --> 0 why?
But I could go ahead and implement my own unpack_from like this:
def unpack_from(fmt, buffer):
    # unpack at the buffer's current position, then advance it
    result = struct.unpack_from(fmt, buffer, buffer.tell())
    buffer.seek(struct.calcsize(fmt), 1)
    return result
Is this considered a reasonable solution?
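A brief usage sketch of that helper on the mmap, under the same assumptions as above (the format strings are placeholders for the real struct layouts):
mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)

header = unpack_from("2h3i", mm)  # advances mm by struct.calcsize("2h3i")
record = unpack_from("4d", mm)    # next struct starts where the previous one ended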

Why can't I read anything with File.open in python when I open the file to write first?

f = open('day_temps.txt','w')
f.write("10.3,10.1,9.9,9.9,9.8,9.6,9.0,10.1,10.2,11.1")
f.close
def get_stats(file_name):
    temp_file = open(file_name, 'r')
    temp_array = temp_file.read().split(',')
    number_array = []
    for value in temp_array:
        number_array.append(float(value))
    number_array.sort()
    max_value = number_array[-1]
    min_value = number_array[0]
    sum_value = 0
    for value in number_array:
        sum_value += value
    avg_value = sum_value / len(number_array)
    return min_value, max_value, avg_value

mini, maxi, mean = get_stats('day_temps.txt')
print "({0:.5}, {1:.5}, {2:.5})".format(mini, maxi, mean)
Without the first 3 lines, the code works; with them, I can't read anything from temp_file. I don't get it, any ideas?
You never closed the file with this line of code:
f.close
Either use f.close(), or the with syntax, which auto-closes the file handle and prevents problems like this:
with open('day_temps.txt', 'w') as handle:
    handle.write("10.3,10.1,9.9,9.9,9.8,9.6,9.0,10.1,10.2,11.1")
Also, you can condense your code significantly:
with open('day_temps.txt', 'w') as handle:
    handle.write("10.3,10.1,9.9,9.9,9.8,9.6,9.0,10.1,10.2,11.1")

def get_stats(file_name):
    with open(file_name, 'r') as handle:
        numbers = map(float, handle.read().split(','))
    return min(numbers), max(numbers), sum(numbers) / len(numbers)

if __name__ == '__main__':
    stats = get_stats('day_temps.txt')
    print "({0:.5}, {1:.5}, {2:.5})".format(*stats)
In line 3, f.close should read f.close(). To force the file to write immediately (rather than when the file is closed), you can call f.flush() after writing: see Why file writing does not happen when it is suppose to happen in the program flow? for more details.
Alternatively, the file will be closed automatically when the script ends completely (including closing any interactive interpreter windows, like IDLE). In some cases, forgetting to flush or close a file properly can lead to extremely confusing behavior, such as bugs in interactive sessions that would not appear when running the script from the command line.
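A tiny sketch of that flush-based variant, for illustration only (the simpler fix is just calling f.close()):
f = open('day_temps.txt', 'w')
f.write("10.3,10.1,9.9,9.9,9.8,9.6,9.0,10.1,10.2,11.1")
f.flush()  # push buffered data to disk now
f.close()  # and properly close the handle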
f.close just refers to the method object rather than calling the method. In the REPL you get this:
>>> f.close
<built-in method close of file object at 0x00000000027B1ED0>
Add the parentheses so the method is actually called: f.close().
