I need to save a matplotlib plot to a temporary file that I control, since this code will run inside a Python Flask REST service.
I tried this:
fp = tempfile.NamedTemporaryFile()
return_base64 = ""
with fp:
    fp.write(plt.savefig)  # THIS IS WRONG....
    with open(fp.name, 'rb') as open_it:
        open_it.seek(0)
        return_base64 = str(base64.b64encode(open_it.read()))
        # strip off leading b and ' and trailing '
        return_base64 = return_base64[2: len(return_base64) - 1]
        open_it.close()
fp.close()
But, "fp.write" doesn't work with saving the plt.savefig as I did above.
My issue is that I'm using the PRAAT phonetic library and there does not seem to be a way to use the "Sound()" method inside a REST service. Thus, I'm doing lots of temporary files to work around this.
So, how do I write the matplotib plot to a named temporary file?
Appreciation and thanks in advance.
I am sharing this code; it stores a .jpg file in my temporary folder:
import io
import tempfile

buf = io.BytesIO()
plt.savefig(buf, format="jpg")
# print(buf.getvalue())  # returns the bytes of the plot

fp = tempfile.NamedTemporaryFile()
# print(fp.name)  # returns the temporary file name
with open(f"{fp.name}.jpg", 'wb') as ff:
    ff.write(buf.getvalue())

buf.close()
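As a follow-up, if the end goal is the base64 string for the Flask response, one option (just a sketch, not the only way) is to skip the temporary file entirely and base64-encode the in-memory buffer:

import base64
import io

buf = io.BytesIO()
plt.savefig(buf, format="png")  # or format="jpg"
buf.seek(0)

# encode the raw image bytes and decode to a plain str,
# so there is no leading b'...' to strip off afterwards
return_base64 = base64.b64encode(buf.getvalue()).decode('ascii')
buf.close()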
My script generates a PDF (a PyPDF2.pdf.PdfFileWriter object) and stores it in a variable.
I need to work with it as a file-like object further on in the script, but right now I have to write it to disk first and then open that file again to work with it.
To avoid these unnecessary write/read operations I found many candidates (StringIO, BytesIO and so on), but I cannot work out which one applies to my case.
As far as I understand, I need to "convert" (or write to RAM) the PyPDF2.pdf.PdfFileWriter object into a file-like object so I can work with it directly.
Or is there another method that fits my case exactly?
UPDATE: here is a code sample:
from pdfrw import PdfReader, PdfWriter, PageMerge
from PyPDF2 import PdfFileReader, PdfFileWriter

red_file = PdfFileReader(open("file_name.pdf", 'rb'))
large_pages_indexes = [1, 7, 9]
large = PdfFileWriter()
for i in large_pages_indexes:
    p = red_file.getPage(i)
    large.addPage(p)

# here final data have to be written (I would like to avoid that)
with open("virtual_file.pdf", 'wb') as tmp:
    large.write(tmp)

# here I need to read exported "virtual_file.pdf" (I would like to avoid that too)
with open("virtual_file.pdf", 'rb') as tmp:
    pdf = PdfReader(tmp)  # here I'm starting to work with this file using another module "pdfrw"
    print(pdf)
To avoid slow disk I/O it appears you want to replace
with open("virtual_file.pdf", 'wb') as tmp:
large.write(tmp)
with open("virtual_file.pdf", 'rb') as tmp:
pdf = PdfReader(tmp)
with
buf = io.BytesIO()
large.write(buf)
buf.seek(0)
pdf = PdfReader(buf)
Also, buf.getvalue() is available to you.
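For context, here is a minimal end-to-end sketch of that replacement, reusing red_file, large_pages_indexes and large from the question's code sample:

import io
from PyPDF2 import PdfFileReader, PdfFileWriter
from pdfrw import PdfReader

red_file = PdfFileReader(open("file_name.pdf", 'rb'))
large = PdfFileWriter()
for i in [1, 7, 9]:
    large.addPage(red_file.getPage(i))

# write the generated PDF into an in-memory buffer instead of a file on disk
buf = io.BytesIO()
large.write(buf)
buf.seek(0)

# pdfrw can read straight from the file-like buffer
pdf = PdfReader(buf)
print(pdf)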
I'm using Python 3.5 on Windows.
I have this little piece of code that downloads close to one hundred CSV files from different URLs stored in Links.txt:
from urllib import request

new_lines = 'None'

def download_data(csv_url):
    response = request.urlopen(csv_url)
    csv = response.read()
    csv_str = str(csv)
    global new_lines
    new_lines = csv_str.split("\\n")

with open('Links.txt') as file:
    for line in file:
        URL = line
        file_name = URL[54:].rsplit('.ST', 1)[0]
        download_data(URL)
        save_destination = 'C:\\Download data\\Data\\' + file_name + '.csv'
        fx = open(save_destination, "w")
        for lines in new_lines:
            fx.write(lines + "\n")
        fx.close()
The problem is that the generated CSV files always start with b' and, after the last line of data, there is another ' followed by a couple of empty rows. I do not see these characters when I look at the files in the browser (before I download them).
This creates problems when I want to import and use the data in a database. Do you have any idea on why this happens and how I can get the code to write the CSV files correctly?
Tips that can make the code faster/better, or adjustments for other flaws in the code are obviously very welcome.
What's happening is that urllib treats its stream as bytes - any string that looks like b'...' means it's a byte-string.
Your immediate problem could be solved by decoding the stream with decode('utf-8') (as Chedy2149 shows), which converts the data from bytes to text.
However, you can completely avoid this problem by downloading the file directly to disk. You currently go through the work of downloading it, splitting it, and writing it to disk, but all of that seems unnecessary because your code ultimately just writes the file's contents to disk without any additional processing.
You can use urllib.request.urlretrieve and download to a file directly.
Here's an example, modified from your code.
import urllib.request

def download_data(url, file_to_save):
    filename, rsp = urllib.request.urlretrieve(url, file_to_save)
    # Assuming everything worked, the file has been downloaded to file_to_save

with open('Links.txt') as file:
    for line in file:
        url = line.rstrip()  # adding this here to remove extraneous '\n' from string
        file_name = url[54:].rsplit('.ST', 1)[0]
        save_destination = 'C:\\Download data\\Data\\' + file_name + '.csv'
        download_data(url, save_destination)
In the download_data function you need to convert the byte string csv response to a plain string.
Try replacing csv_str = str(csv) with csv_str = csv.decode('utf-8').
This should properly decode the byte string returned by response.read().
The problem is that response.read() returns a bytes object; str() doesn't convert it to a string the way you expect. Use csv_str = csv.decode() instead.
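Concretely, a minimal sketch of download_data with that decode fix, assuming the rest of the loop from the question stays the same:

from urllib import request

def download_data(csv_url):
    response = request.urlopen(csv_url)
    csv_bytes = response.read()           # bytes, e.g. b'col1,col2\n...'
    csv_str = csv_bytes.decode('utf-8')   # decode to a plain str
    global new_lines
    new_lines = csv_str.split("\n")       # split on real newlines, not the literal "\\n"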
I am using a 3rd party python library that creates .svg's (specifically for evolutionary trees) which has a render function for tree objects. What I want is the svg in string form that I can edit. Currently I save the svg and read the file as follows:
tree.render('location/filename.svg', other_args...)
f = open('location/filename.svg', "r")
svg_string = f.read()
f.close()
This works, but is it possible to use a tempfile instead? So far I have:
t = tempfile.NamedTemporaryFile()
tmpdir = tempfile.mkdtemp()
t.name = os.path.join(tmpdir, 'tmp.svg')
tree.render(t.name, other_args...)
svg_string = t.read()
t.close()
Can anyone explain why this doesn't work and/or how I could do this without creating a file (which I just have to delete later)? I go on to edit svg_string for use in a Django application.
EDIT: Importantly, the render function can also be used to create other file types, e.g. .png, so the .svg extension needs to be specified.
You should not set the name of your temporary file yourself. When you create it, a name is generated for you, and you can use it directly.
t = tempfile.NamedTemporaryFile()
tree.render(t.name, other_args...)
t.file.seek(0) #reset the file pointer to the beginning
svg_string = t.read()
t.close()
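Since the EDIT says the .svg extension has to be in the filename, one option (a sketch, not tested against that particular tree library) is to pass suffix= when creating the temporary file, so the generated name already ends in .svg:

import tempfile

with tempfile.NamedTemporaryFile(suffix='.svg') as t:
    tree.render(t.name, other_args...)     # render call taken from the question
    t.seek(0)                              # reset the file pointer before reading what render() wrote
    svg_string = t.read().decode('utf-8')  # the file is opened in binary mode, so decode bytes to str

Note that on Windows the renderer may not be able to reopen the file by name while it is still held open, so a separate temporary directory as in the question may still be needed there.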
Using Python V.3.3
I was wondering how to create a .PNG (or any other picture file) from hex data stored in a text document. Currently the code reads a picture file, converts it to hex, and saves that to a text file. It then reads the text file back and grabs the data.
The problem I am having is that when it tries to write a new picture file, it does so, but no data is stored: no matter what I try I end up with a blank, 0-byte picture. How do I fix this? Is there any specific format I need to use for my getbyte variable? Any help would be much appreciated. I'm trying to get this working so I can send/store data more easily for 2D game maps.
import binascii
f = open("c:/test1.png", "rb")
ima = f.read()
f.close()
print (binascii.hexlify(ima))
f = open("file123.txt", "w")
f.write(binascii.hexlify(ima).decode('utf-8'))
f.close()
#-----------
f = open("file123.txt", "r+")
getbyte = f.read()
f.close()
getbytes = (binascii.unhexlify(getbyte))
getbyte = (binascii.hexlify(getbytes))
f = open("filetest.png", "wb")
f.write(getbyte)
f.close
#-----------
To save it as a binary image, write getbytes:
getbytes = binascii.unhexlify(getbyte)
f = open("filetest.png", "wb")
f.write(getbytes)
f.close()
I think you may also be looking in the wrong directory; try saving under a different name and see whether it creates that file.
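For reference, a minimal sketch of the whole round trip using with blocks, so every file is flushed and closed properly:

import binascii

# read the original image and store its hex representation as text
with open("c:/test1.png", "rb") as f:
    ima = f.read()
with open("file123.txt", "w") as f:
    f.write(binascii.hexlify(ima).decode('ascii'))

# read the hex text back and rebuild the binary image
with open("file123.txt", "r") as f:
    getbyte = f.read()
with open("filetest.png", "wb") as f:
    f.write(binascii.unhexlify(getbyte))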
I am not too sure of the best way to word this, but what I want to do is read a PDF file, make various modifications, and save the modified PDF over the original file. As of now, I am able to save the modified PDF to a separate file, but I am looking to replace the original, not create a new file.
Here is my current code:
from pyPdf import PdfFileWriter, PdfFileReader

output = PdfFileWriter()
input = PdfFileReader(file('input.pdf', 'rb'))
blank = PdfFileReader(file('C:\\BLANK.pdf', 'rb'))

# Copy the input pdf to the output.
for page in range(int(input.getNumPages())):
    output.addPage(input.getPage(page))

# Add a blank page if needed.
if (input.getNumPages() % 2 != 0):
    output.addPage(blank.getPage(0))

# Write the output to pdf.
outputStream = file('input.pdf', 'wb')
output.write(outputStream)
outputStream.close()
If I change the outputStream to a different file name, it works fine; I just can't save over the input file because it is still being used. I have tried to .close() the stream, but that gave me errors as well.
I have a feeling this has a fairly simple solution, I just haven't had any luck finding it.
Thanks!
You can always rename the temporary output file to the old file:
import os
f = open('input.pdf', 'rb')
# do stuff to temp.pdf
f.close()
os.rename('temp.pdf', 'input.pdf')
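Applied to the question's code, a rough sketch (not tested) would be to write to a temporary name, close everything, and then rename over the original:

import os
from pyPdf import PdfFileWriter, PdfFileReader

inputStream = open('input.pdf', 'rb')
input = PdfFileReader(inputStream)
output = PdfFileWriter()
# ... copy pages and add the blank page exactly as in the question ...

# write to a temporary name, so input.pdf can still be read while writing
outputStream = open('temp.pdf', 'wb')
output.write(outputStream)
outputStream.close()

# release the original, then replace it
del input
inputStream.close()
os.rename('temp.pdf', 'input.pdf')  # on Windows, remove input.pdf first (or use os.replace on Python 3.3+)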
You said you tried to close() the stream but got errors? You could delete the PdfFileReader objects to ensure nobody still has access to the stream. And then close the stream.
from pyPdf import PdfFileWriter, PdfFileReader
inputStream = file('input.pdf', 'rb')
blankStream = file('C:\\BLANK.pdf', 'rb')
output = PdfFileWriter()
input = PdfFileReader(inputStream)
blank = PdfFileReader(blankStream)
...
del input # PdfFileReader won't mess with the stream anymore
inputStream.close()
del blank
blankStream.close()
# Write the output to pdf.
outputStream = file('input.pdf', 'wb')
output.write(outputStream)
outputStream.close()
If the PDFs are small enough (that'll depend on your platform), you could just read the whole thing in, close the file, modify the data, then write the whole thing back over the same file.
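A rough sketch of that in-memory approach, reusing the question's pyPdf code and assuming the whole PDF fits comfortably in memory:

import io
from pyPdf import PdfFileWriter, PdfFileReader

# read the whole original PDF into memory, then close the file right away
with open('input.pdf', 'rb') as f:
    data = f.read()

input = PdfFileReader(io.BytesIO(data))
blank = PdfFileReader(open('C:\\BLANK.pdf', 'rb'))

output = PdfFileWriter()
for page in range(input.getNumPages()):
    output.addPage(input.getPage(page))
if input.getNumPages() % 2 != 0:
    output.addPage(blank.getPage(0))

# nothing holds input.pdf open any more, so it is safe to overwrite it
with open('input.pdf', 'wb') as outputStream:
    output.write(outputStream)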