for i, pokemon in enumerate(pokemon):
    pokeurl = f"https://img.pokemondb.net/sprites/bank/normal/{pokemon}.png"
    r = requests.get(pokeurl, stream=True)
    open('pokemon.png', 'wb').write(r.content)
    # do_stuff
Hi, I'm kind of new to Python. The first time the function is called, the images are saved correctly (6 times, once per pokemon in the list). But the second time I call the same function, it saves a corrupted image.
I'd imagine it's down to how you open the file.
As written in the docs
Calling f.write() without using the with keyword or calling f.close() might result in the arguments of f.write() not being completely written to the disk, even if the program exits successfully.
So just use a context manager
with open('pokemon.png', 'wb') as f:
    f.write(r.content)
Your code works fine after using a unique name to save the images locally:
import requests
import os

pokemons = [
    "glaceon", "aerodactyl", "charizard", "blastoise", "greninja", "haxorus",
    "flareon", "pikachu"
]

for i, pokemon in enumerate(pokemons):
    pokeurl = f"https://img.pokemondb.net/sprites/bank/normal/{pokemon}.png"
    r = requests.get(pokeurl, stream=True)
    open(f'/tmp/p/{pokemon}.png', 'wb').write(r.content)

# verify images saved
print(os.listdir('/tmp/p'))
Out:
['charizard.png', 'flareon.png', 'pikachu.png', 'haxorus.png', 'blastoise.png', 'aerodactyl.png', 'greninja.png', 'glaceon.png']
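Combining the two points above (a unique file name per pokemon plus a context manager) gives something like this sketch; the list and the output location are just placeholders, not part of the original answer:

import requests

pokemons = ["glaceon", "aerodactyl", "charizard", "blastoise", "greninja", "haxorus"]

for pokemon in pokemons:
    pokeurl = f"https://img.pokemondb.net/sprites/bank/normal/{pokemon}.png"
    r = requests.get(pokeurl)
    # one file per pokemon, closed automatically when the with block ends
    with open(f'{pokemon}.png', 'wb') as f:
        f.write(r.content)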
Related
What I need to do is write some messages to a .txt file, close it and send it to a server. This happens in an infinite loop, so the code should look more or less like this:
import time

import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

num = 0
while True:
    num += 1
    filename = f"example{num}.txt"
    with open(filename, "w") as f:
        f.write("Hello")
        f.close()
    mp_encoder = MultipartEncoder(
        fields={
            'file': ("file", open(filename, 'rb'), 'text/plain')
        }
    )
    r = requests.post("my_url/save_file", data=mp_encoder, headers=my_headers)
    time.sleep(10)
The post works if the file is created manually inside my working directory, but if I try to create it and write to it through code, I receive this response message:
500 - Internal Server Error
System.IO.IOException: Unexpected end of Stream, the content may have already been read by another component.
I don't see the file appearing in the project window of PyCharm. I even used time.sleep(10) because at first I thought it could be a timing problem, but that didn't solve it. In fact, the file appears in my working directory only when I stop the code, so it seems the file is held by the program even after I explicitly called f.close(). I know the with statement should take care of closing files, but it didn't look like that here, so I added a close() to check whether that was the problem (spoiler: it was not).
I solved the problem by using another file
with open(filename, "r") as firstfile, open("new.txt", "a+") as secondfile:
secondfile.write(firstfile.read())
with open(filename, 'w'):
pass
r = requests.post("my_url/save_file", data=mp_encoder, headers=my_headers)
if r.status_code == requests.codes.ok:
os.remove("new.txt")
else:
print("File not saved")
I make a copy of the file, empty the original file to save space, and send the copy to the server (and then delete the copy). It looks like the problem was that the original file was held open by the Python logging module.
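If the file really is being held open by the logging module, another option is to close that handler before posting the file. A minimal sketch, assuming a logging.FileHandler attached to the root logger is writing to the same file (the handler lookup is an assumption about how logging was configured, not something from the original question):

import logging

root = logging.getLogger()
for handler in list(root.handlers):
    # baseFilename is the absolute path the FileHandler writes to
    if isinstance(handler, logging.FileHandler) and handler.baseFilename.endswith(filename):
        handler.close()            # flushes and releases the file
        root.removeHandler(handler)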
Firstly, can you change open(f, 'rb') to open("example.txt", 'rb')? In open, you should be passing a file name, not a closed file object.
Also, you can use os.path.abspath to print the working directory and see where the file is written:
import os
os.path.abspath('.')
Third point: when you use the with context manager to open a file, you don't need to close the file yourself. The context manager is supposed to do it.
with open("example.txt", "w") as f:
f.write("Hello")
I have this code for the server:
@app.route('/get', methods=['GET'])
def get():
    return send_file("token.jpg", attachment_filename="token.jpg", mimetype='image/jpg')
and this code for getting the response:
r = requests.get(url + '/get')
And I need to save the file from the response to the hard drive, but I can't use r.files. What do I need to do in this situation?
Assuming the GET request is valid, you can use Python's built-in function open to open a file in binary mode and write the returned content to disk. Example below.
file_content = requests.get('http://yoururl/get')
save_file = open("sample_image.png", "wb")
save_file.write(file_content.content)
save_file.close()
As you can see, to write the image to disk, we use open, and write the returned content to 'sample_image.png'. Since your server-side code seems to be returning only one file, the example above should work for you.
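If you prefer not to manage close() yourself, the context-manager form from the first answer in this thread applies here as well; a small sketch reusing the same names, plus an early error check:

import requests

file_content = requests.get('http://yoururl/get')
file_content.raise_for_status()  # stop early if the server returned an error status

with open("sample_image.png", "wb") as save_file:
    save_file.write(file_content.content)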
You can set the stream parameter and extract the filename from the HTTP headers. Then the raw data from the undecoded body can be read and saved chunk by chunk.
import os
import re
import requests
resp = requests.get('http://127.0.0.1:5000/get', stream=True)
name = re.findall('filename=(.+)', resp.headers['Content-Disposition'])[0]
dest = os.path.join(os.path.expanduser('~'), name)
with open(dest, 'wb') as fp:
    while True:
        chunk = resp.raw.read(1024)
        if not chunk:
            break
        fp.write(chunk)
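Note that this relies on the server actually sending a Content-Disposition header. With the Flask route shown earlier, that likely means passing as_attachment=True; a sketch of the server side (attachment_filename is the older Flask spelling used in the question, newer Flask calls it download_name):

from flask import Flask, send_file

app = Flask(__name__)

@app.route('/get', methods=['GET'])
def get():
    # as_attachment=True makes Flask emit the Content-Disposition header
    return send_file("token.jpg", as_attachment=True,
                     attachment_filename="token.jpg", mimetype='image/jpg')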
I am trying to retrieve data from an API and immediately write the JSON response directly to a file, without storing any part of the response in memory. The reason for this requirement is that I'm executing this script on an AWS Linux EC2 instance that only has 2GB of memory, and if I try to hold everything in memory and then write the responses to a file, the process will fail due to insufficient memory.
I've tried using f.write() as well as sys.stdout.write(), but both of these approaches seemed to only write the file after all the queries were executed. While this worked with my small example, it didn't work when dealing with my actual data.
The problem with both approaches below is that the file doesn't populate until the loop is complete. This will not work with my actual process, as the machine doesn't have enough memory to hold all the responses in memory.
How can I adapt either of the approaches below, or come up with something new, to write data received from the API immediately to a file without saving anything in memory?
Note: I'm using Python 3.7 but happy to update if there is something that would make this easier.
My Approach 1
# script1.py
import requests
import json

with open('data.json', 'w') as f:
    for i in range(0, 100):
        r = requests.get("https://httpbin.org/uuid")
        data = r.json()
        f.write(json.dumps(data) + "\n")

f.close()
My Approach 2
# script2.py
import requests
import json
import sys

for i in range(0, 100):
    r = requests.get("https://httpbin.org/uuid")
    data = r.json()
    sys.stdout.write(json.dumps(data))
    sys.stdout.write("\n")
With approach 2, I tried using the > to redirect the output to a file:
script2.py > data.json
You can use response.iter_content to download the content in chunks. For example:
import requests

url = 'https://httpbin.org/uuid'

with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open('data.json', 'wb') as f_out:
        for chunk in r.iter_content(chunk_size=8192):
            f_out.write(chunk)
Saves data.json with content:
{
"uuid": "991a5843-35ca-47b3-81d3-258a6d4ce582"
}
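As a side note, Approach 1 in the question already writes one line per response and only ever holds a single response in memory; if the file only appears to fill up at the end, that is usually just output buffering, and calling f.flush() after each write makes the data show up on disk immediately. A small sketch adapting the question's own code:

import json
import requests

with open('data.json', 'w') as f:
    for i in range(0, 100):
        r = requests.get("https://httpbin.org/uuid")
        f.write(json.dumps(r.json()) + "\n")
        f.flush()  # push the buffered line out to the file right away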
I was making an image downloading project for a website, but I encountered some strange behavior using tqdm. In the code below I included two options for making the tqdm progress bar. In option one I did not pass the iterable content from the response into tqdm directly, while in the second option I did. Although the code looks similar, the results are strangely different.
This is what the progress bar's result looks like using Option 1
This is what the progress bar's result looks like using Option 2
Option one is the result I desire but I just couldn't find an explanation for the behavior of using Option 2. Can anyone help me explain this behavior?
import requests
from tqdm import tqdm
import os

# Folder to store in
default_path = "D:\\Downloads"

def download_image(url):
    """
    This function will download the given url's image with proper filename labeling
    If a path is not provided the image will be downloaded to the Downloads folder
    """
    # Establish a Session with cookies
    s = requests.Session()
    # Fix for pixiv's requests: you have to add a referer in order to download images
    response = s.get(url, headers={'User-Agent': 'Mozilla/5.0',
                                   'referer': 'https://www.pixiv.net/'}, stream=True)
    file_name = url.split("/")[-1]  # Retrieve the file name of the link
    together = os.path.join(default_path, file_name)  # Join path with the file_name. Where to store the file
    file_size = int(response.headers["Content-Length"])  # Get the total byte size of the file
    chunk_size = 1024  # Consuming 1024 bytes per chunk

    # Option 1
    progress = tqdm(total=file_size, unit='B', unit_scale=True, desc="Downloading {file}".format(file=file_name))
    # Open the file destination and write in binary mode
    with open(together, "wb") as f:
        # Loop through each of the chunks in response in chunk_size and update the progress by calling update using
        # len(chunk) not chunk_size
        for chunk in response.iter_content(chunk_size):
            f.write(chunk)
            progress.update(len(chunk))

    # Option 2
    """progress = tqdm(response.iter_content(chunk_size), total=file_size, unit='B', unit_scale=True, desc="Downloading {file}".format(file=file_name))
    with open(together, "wb") as f:
        for chunk in progress:
            progress.update(len(chunk))
            f.write(chunk)
    # Close the tqdm object and file object as good practice
    """
    progress.close()
    f.close()

if __name__ == "__main__":
    download_image("Image Link")
Looks like an existing bug with tqdm. https://github.com/tqdm/tqdm/issues/766
Option 1:
Provides tqdm the total size
On each iteration, update progress. Expect the progress bar to keep moving.
Works fine.
Option 2:
Provides tqdm the total size along with the iterator from iter_content, so tqdm can track the progress itself.
On each iteration, tqdm should automatically pick up the progress from the iterator and advance the bar.
However, you also call progress.update manually, which should not be the case.
Instead, let the wrapped iterator do the job.
But this doesn't work either, and the issue is already reported.
Suggestion on Option 1:
To avoid closing streams manually, you can enclose them inside a with statement. The same applies to tqdm as well.
# Open the file destination and write in binary mode
with tqdm(total=file_size,
          unit='B',
          unit_scale=True,
          desc="Downloading {file}".format(file=file_name)
          ) as progress, open(together, "wb") as f:
    # Loop through each of the chunks in response in chunk_size and update the progress by calling update using
    # len(chunk) not chunk_size
    for chunk in response.iter_content(chunk_size):
        progress.update(len(chunk))
        f.write(chunk)
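If you do want tqdm itself to account for the bytes instead of calling update manually, recent tqdm versions also provide tqdm.wrapattr, which wraps the file's write method. A sketch using the same variables as the question (this replaces the Option 2 pattern rather than fixing it, and wrapattr availability depends on your tqdm version):

from tqdm import tqdm

with open(together, "wb") as raw, tqdm.wrapattr(raw, "write", total=file_size,
        desc="Downloading {file}".format(file=file_name)) as f:
    for chunk in response.iter_content(chunk_size):
        f.write(chunk)  # each write advances the bar by the number of bytes written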
I would like to print a PDF file with an external printer. However, since I'm about to open, create or transform multiple files in a loop, I would like to print the result without having to save it as a PDF file in every iteration.
Simplified code looks like this:
import PyPDF2
import os
pdf_in = open('tubba.pdf', 'rb')
pdf_reader = PyPDF2.PdfFileReader(pdf_in)
pdf_writer = PyPDF2.PdfFileWriter()
page = pdf_reader.getPage(0)
page.rotateClockwise(90)
# Some other operations done on the page, such as scaling, cropping etc.
pdf_writer.addPage(page)
pdf_out = open('rotated.pdf', 'wb')
pdf_writer.write(pdf_out)
pdf_print = os.startfile('rotated.pdf', 'print')
pdf_out.close()
pdf_in.close()
Is there any way to print "page", or "pdf_writer"?
Best regards
You can just use variables.
E.g.:
path = r'C:\yourfile.pdf'
os.startfile(path)  # just pass the variable here
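That only covers passing a path around and still assumes the rotated PDF was written to disk. To avoid keeping a named output file per iteration, one option is to write the PdfFileWriter output into an in-memory buffer and only spill it to a throwaway temporary file for printing, since os.startfile needs a real path. A minimal sketch (the tempfile handling is an assumption, not something from the answer above):

import io
import os
import tempfile

buffer = io.BytesIO()
pdf_writer.write(buffer)  # PdfFileWriter.write accepts any file-like object

# os.startfile needs a real path, so use a temporary file just for printing
with tempfile.NamedTemporaryFile(suffix='.pdf', delete=False) as tmp:
    tmp.write(buffer.getvalue())
    tmp_path = tmp.name

os.startfile(tmp_path, 'print')  # Windows only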