How to get Python to successfully download large images from the internet

So I've been using
urllib.request.urlretrieve(URL, FILENAME)
to download images off the internet. It works great, but fails on some images. The ones it fails on seem to be the larger ones, e.g. http://i.imgur.com/DEKdmba.jpg. It downloads them fine, but when I try to open these files, Photo Viewer gives me the error "Windows Photo Viewer can't open this picture because the file appears to be damaged, corrupted or too large".
What might be the reason it can't download these, and how can I fix it?
EDIT: After looking further, I don't think the problem is large images: it manages to download larger ones. It just seems to be some random ones that it can never download, no matter how many times I rerun the script. Now I'm even more confused.

In the past, I have used this code for copying from the internet. I have had no trouble with large files.

import urllib2

def download(url):
    file_name = raw_input("Name: ")
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    meta = u.info()
    file_size = int(meta.getheaders("Content-Length")[0])
    print "Downloading: %s Bytes: %s" % (file_name, file_size)

    file_size_dl = 0
    block_size = 8192
    while True:
        buffer = u.read(block_size)
        if not buffer:
            break
        file_size_dl += len(buffer)
        f.write(buffer)  # without this write, nothing is ever saved to disk
    f.close()
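A hypothetical call, just to show how it's used (the URL is the one from the question; the function itself prompts for the output file name):

download('http://i.imgur.com/DEKdmba.jpg')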

Here's the sample code for Python 3 (tested on Windows 7):

import urllib.request

def download_very_big_image():
    url = 'http://i.imgur.com/DEKdmba.jpg'
    filename = 'C://big_image.jpg'
    conn = urllib.request.urlopen(url)
    output = open(filename, 'wb')  # binary flag needed on Windows
    output.write(conn.read())
    output.close()
For completeness' sake, here's the equivalent code in Python 2:

import urllib2

def download_very_big_image():
    url = 'http://i.imgur.com/DEKdmba.jpg'
    filename = 'C://big_image.jpg'
    conn = urllib2.urlopen(url)
    output = open(filename, 'wb')  # binary flag needed on Windows
    output.write(conn.read())
    output.close()
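If memory use is a concern for truly large files, here is a minimal streaming sketch for Python 3, assuming the same URL and filename as above; shutil.copyfileobj is standard library and copies in chunks, so the image never has to fit in memory all at once:

import shutil
import urllib.request

def download_very_big_image_streaming():
    url = 'http://i.imgur.com/DEKdmba.jpg'
    filename = 'big_image.jpg'
    # copy the response to disk chunk by chunk instead of one big read()
    with urllib.request.urlopen(url) as conn, open(filename, 'wb') as output:
        shutil.copyfileobj(conn, output)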

This should work: use the requests module:

import requests

img_url = 'http://i.imgur.com/DEKdmba.jpg'
img_name = img_url.split('/')[-1]
img_data = requests.get(img_url).content
with open(img_name, 'wb') as handler:
    handler.write(img_data)
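If holding the whole image in memory is a concern, the same approach can stream the body; stream=True and iter_content are standard requests features, and the 8192-byte chunk size here is an arbitrary choice:

import requests

img_url = 'http://i.imgur.com/DEKdmba.jpg'
img_name = img_url.split('/')[-1]
with requests.get(img_url, stream=True) as response:
    with open(img_name, 'wb') as handler:
        # write the body as it arrives instead of buffering it all
        for chunk in response.iter_content(chunk_size=8192):
            handler.write(chunk)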

Related

Download images from URL Python

I have a problem with my script when I try to download images from a web URL. It works for another page (offex.pl), but the images from my shop are not working: the files are all there, but I can't open them.
My code:
import os
import time

import requests
from termcolor import colored

def get_folder(url):
    all_folders = os.path.dirname(url)
    folder = os.path.basename(all_folders)
    return folder

def filename(url):
    file = url[url.rfind("/") + 1:]
    return file

def download(link):
    error = []
    ok = 0
    fail = 0
    root_folder = get_folder(link)
    path = "{}/{}".format("download", root_folder)
    if not os.path.exists(path):
        os.makedirs(path)
    url = link
    file = filename(link)
    result = requests.get(url, stream=True)
    completeName = os.path.join("download", root_folder, file)
    print(completeName)
    if result.status_code == 200:
        image = result.raw.read()
        open(completeName, "wb").write(image)
        ok += 1
        succes = "{} {} {}".format(ok, colored("Pobrano:", "green"), url)  # "Pobrano" = "Downloaded"
        print(succes)
        time.sleep(1)
    else:
        found_error = "{} {}".format(colored("Brak pliku!:", "red"), url)  # "Brak pliku!" = "File missing!"
        print(found_error)
        fail += 1
        error.append("link: {}".format(url))
    with open("log.txt", "w") as filehandle:
        for listitem in error:
            filehandle.write('%s\n' % listitem)
    print(colored("Pobrano plików: ", "green"), ok)  # "Files downloaded:"
    print(colored("Błędy pobierania: ", "red"), fail)  # "Download errors:"

img_url = "https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg"
download(img_url)
What am I doing wrong?
For example, https://offex.pl/images/detailed/11/94102_jeep_sbhn-8h.jpg downloads OK, but the URL from my shop, https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg, is not working.
If you want to use the requests module, you can use this:

import requests

response = requests.get("https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg")
with open('./Image.jpg', 'wb') as f:
    f.write(response.content)
The issue is with the URL you are using to download. It's not exactly an issue, but a difference from the other URL you mentioned.
Let me explain.
The URL https://offex.pl/images/detailed/11/94102_jeep_sbhn-8h.jpg returns the image as the response, without any compression.
On the other hand, the shop URL https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg returns the image with gzip compression enabled in the headers.
So the raw response you get is compressed with gzip. You can decompress the response yourself with the gzip module, if you know the compression is always gzip, like below:
import gzip
import io

import requests

# "result" is the streamed response from the question's code
result = requests.get("https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg", stream=True)
image = result.raw.read()  # raw body, still gzip-compressed
buffer = io.BytesIO(image)
deflatedContent = gzip.GzipFile(fileobj=buffer)
open("D:/sample.jpg", "wb").write(deflatedContent.read())
Or you can use alternative libraries like urllib2, which don't ask the server for a compressed response in the first place (requests sends an Accept-Encoding: gzip, deflate header by default; urllib2 does not). I was trying to explain why it failed for your URL but not for the other. Hope this makes sense.
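A simpler fix, staying with requests: the library already decompresses the body as long as you read it through the normal accessors. A hedged sketch of two equivalent options, applied to the shop URL from the question (decode_content on the raw stream is the same urllib3 attribute that appears in a later question on this page):

import requests

url = "https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg"

# Option 1: response.content is decoded according to Content-Encoding
response = requests.get(url)
open("Image1.jpg", "wb").write(response.content)

# Option 2: keep streaming from response.raw, but tell urllib3
# to decode the gzip layer as it reads
result = requests.get(url, stream=True)
result.raw.decode_content = True
open("Image2.jpg", "wb").write(result.raw.read())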
Try:

import urllib2

def download_web_image(url):
    request = urllib2.Request(url)
    img = urllib2.urlopen(request).read()
    with open('test.jpg', 'wb') as f:
        f.write(img)

download_web_image("https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg")
It works for your URL. I think the issue is with how the library you used handles the response.
from io import BytesIO
import requests
from PIL import Image
fileRequest = requests.get("https://sw19048.smartweb-static.com/upload_dir/shop/misutonida_ec-med-384-ix.jpg")
doc = Image.open(BytesIO(fileRequest.content))
doc.save("newFile.jpg")

Python Requests: downloaded image file binary not identical to original?

I am saving an image file from the web using Python Requests. However, the saved file differs slightly from the original at the binary level, and is a bit bigger. It is still a valid jpg file, but scrambled.
Here is the code:

import requests
import shutil
import os

if __name__ == "__main__":
    image_url = 'http://www.123.com/image.jpg'
    filename = 'out.jpg'
    username = 'myusername'
    password = 'mypassword'
    path = os.path.join('c:/', filename)
    r = requests.get(image_url, auth=(username, password), stream=True)
    if r.status_code == 200:
        with open(path, 'w') as f:
            r.raw.decode_content = False
            shutil.copyfileobj(r.raw, f)
    print 'The End'
What am I doing wrong?
open(path, 'w')
should be:
open(path, 'wb')
The b is for "binary". This will make sure Python won't try to convert character encodings and newlines and reads or writes everything exactly as it is byte-for-byte.
Also see the open() documentation
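To make the corruption concrete, here is a small sketch, in the question's Python 2, of what text mode does to binary data on Windows; the byte values are made up for illustration:

# Python 2 on Windows: text mode translates '\n' (0x0A) into '\r\n' (0x0D 0x0A)
data = '\xff\xd8\x0a\xff\xd9'  # fake "JPEG" bytes containing a 0x0A

f = open('text_mode.jpg', 'w')     # text mode: the 0x0A gains a 0x0D -> corrupted
f.write(data)
f.close()

f = open('binary_mode.jpg', 'wb')  # binary mode: bytes written exactly as-is
f.write(data)
f.close()

print repr(open('text_mode.jpg', 'rb').read())    # '\xff\xd8\r\n\xff\xd9' on Windows
print repr(open('binary_mode.jpg', 'rb').read())  # '\xff\xd8\n\xff\xd9'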

Scrape a jpg file on webpage, then saving it using python

OK, I'm trying to scrape a jpg image from the Gucci website. Take this one as an example:
http://www.gucci.com/images/ecommerce/styles_new/201501/web_full/277520_F4CYG_4080_001_web_full_new_theme.jpg
I tried urllib.urlretrieve, which doesn't work because Gucci blocked the function. So I wanted to use requests to scrape the source code for the image and then write it into a .jpg file.

image = requests.get("http://www.gucci.com/images/ecommerce/styles_new/201501/web_full/277520_F4CYG_4080_001_web_full_new_theme.jpg").text.encode('utf-8')

I encoded it because if I don't, it keeps telling me that gbk cannot encode the string.
Then:
with open('1.jpg', 'wb') as f:
    f.write(image)

Looks good, right? But the result is that the jpg file cannot be opened. There's no image! Windows tells me the jpg file is damaged.
What could be the problem?
I'm thinking that maybe when I scraped the image, I lost some information, or some characters were wrongly scraped. But how can I find out which?
I'm also thinking that maybe some information was lost via encoding. But if I don't encode, I cannot even print it, let alone write it to a file.
What could go wrong?
I am not sure about the purpose of your use of encode. You're not working with text, you're working with an image. You need to access the response as binary data, not as text, and use image manipulation functions rather than text ones. Try this:
from PIL import Image
from io import BytesIO
import requests
response = requests.get("http://www.gucci.com/images/ecommerce/styles_new/201501/web_full/277520_F4CYG_4080_001_web_full_new_theme.jpg")
bytes = BytesIO(response.content)
image = Image.open(bytes)
image.save("1.jpg")
Note the use of response.content instead of response.text. You will need to have PIL or Pillow installed to use the Image module. BytesIO is part of the standard library's io module.
Or you can just save the data straight to disk without looking at what's inside:
import requests
response = requests.get("http://www.gucci.com/images/ecommerce/styles_new/201501/web_full/277520_F4CYG_4080_001_web_full_new_theme.jpg")
with open('1.jpg', 'wb') as f:
    f.write(response.content)
A JPEG file is not text, it's binary data. So you need to use the response's .content attribute to access it.
The code below also includes a get_headers() function, which can be handy when you're exploring a Web site.
import requests

def get_headers(url):
    resp = requests.head(url)
    print("Status: %d" % resp.status_code)
    resp.raise_for_status()
    for t in resp.headers.items():
        print('%-16s : %s' % t)

def download(url, fname):
    ''' Download url to fname '''
    print("Downloading '%s' to '%s'" % (url, fname))
    resp = requests.get(url)
    resp.raise_for_status()
    with open(fname, 'wb') as f:
        f.write(resp.content)

def main():
    site = 'http://www.gucci.com/images/ecommerce/styles_new/201501/web_full/'
    basename = '277520_F4CYG_4080_001_web_full_new_theme.jpg'
    url = site + basename
    fname = 'qtest.jpg'
    try:
        #get_headers(url)
        download(url, fname)
    except requests.exceptions.HTTPError as e:
        print("%s '%s'" % (e, url))

if __name__ == '__main__':
    main()
We call the .raise_for_status() method so that get_headers() and download() raise an Exception if something goes wrong; we catch the Exception in main() and print the relevant info.

Python equivalent of a given wget command

I'm trying to create a Python function that does the same thing as this wget command:
wget -c --read-timeout=5 --tries=0 "$URL"
-c - Continue from where you left off if the download is interrupted.
--read-timeout=5 - If there is no new data coming in for over 5 seconds, give up and try again. Given -c this mean it will try again from where it left off.
--tries=0 - Retry forever.
Used in tandem, those three arguments result in a download that cannot fail.
I want to duplicate those features in my Python script, but I don't know where to begin...
There is also a nice Python module named wget that is pretty easy to use. Keep in mind that the package has not been updated since 2015 and has not implemented a number of important features, so it may be better to use other methods. It depends entirely on your use case. For simple downloading, this module is the ticket. If you need to do more, there are other solutions out there.
>>> import wget
>>> url = 'http://www.futurecrew.com/skaven/song_files/mp3/razorback.mp3'
>>> filename = wget.download(url)
100% [................................................] 3841532 / 3841532
>>> filename
'razorback.mp3'
Enjoy.
However, if wget doesn't work (I've had trouble with certain PDF files), try this solution.
Edit: You can also use the out parameter to use a custom output directory instead of current working directory.
>>> output_directory = <directory_name>
>>> filename = wget.download(url, out=output_directory)
>>> filename
'razorback.mp3'
urllib.request should work.
Just set it up in a while (not done) loop, check whether a local file already exists, and if it does, send a GET with a Range header specifying how far you got in downloading the local file.
Be sure to use read() to append to the local file until an error occurs, as in the sketch below.
This is also potentially a duplicate of Python urllib2 resume download doesn't work when network reconnects
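That description maps to a fairly short sketch. This is one hedged interpretation using urllib.request; the function name, chunk size, and one-second retry pause are arbitrary choices, not part of the original answer, and resuming via Range only works if the server supports it:

import os
import time
import urllib.error
import urllib.request

def wget_like(url, local_file, read_timeout=5):
    # Keep retrying forever, resuming from whatever is already on disk
    while True:
        start = os.path.getsize(local_file) if os.path.exists(local_file) else 0
        request = urllib.request.Request(url)
        if start > 0:
            # Ask the server for the remainder only
            request.add_header('Range', 'bytes=%d-' % start)
        try:
            with urllib.request.urlopen(request, timeout=read_timeout) as response, \
                    open(local_file, 'ab') as f:
                while True:
                    chunk = response.read(8192)
                    if not chunk:
                        return  # finished cleanly
                    f.write(chunk)
        except (urllib.error.URLError, OSError):
            time.sleep(1)  # wait out the hiccup, then resume from the new offset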
I had to do something like this on a version of linux that didn't have the right options compiled into wget. This example is for downloading the memory analysis tool 'guppy'. I'm not sure if it's important or not, but I kept the target file's name the same as the url target name...
Here's what I came up with:
python -c "import requests; r = requests.get('https://pypi.python.org/packages/source/g/guppy/guppy-0.1.10.tar.gz') ; open('guppy-0.1.10.tar.gz' , 'wb').write(r.content)"
That's the one-liner, here's it a little more readable:
import requests
fname = 'guppy-0.1.10.tar.gz'
url = 'https://pypi.python.org/packages/source/g/guppy/' + fname
r = requests.get(url)
open(fname , 'wb').write(r.content)
This worked for downloading a tarball. I was able to extract the package after downloading it.
EDIT:
To address a question, here is an implementation with a progress bar printed to STDOUT. There is probably a more portable way to do this without the clint package, but this was tested on my machine and works fine:
#!/usr/bin/env python
from clint.textui import progress
import requests

fname = 'guppy-0.1.10.tar.gz'
url = 'https://pypi.python.org/packages/source/g/guppy/' + fname

r = requests.get(url, stream=True)
with open(fname, 'wb') as f:
    total_length = int(r.headers.get('content-length'))
    for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length/1024) + 1):
        if chunk:
            f.write(chunk)
            f.flush()
A solution that I often find simpler and more robust is to simply execute a terminal command within Python. In your case:

import os

url = 'https://www.someurl.com'
os.system(f'wget -c --read-timeout=5 --tries=0 "{url}"')
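An aside on the design choice, as a hedged sketch rather than part of the original answer: subprocess.run avoids building a shell command string at all, so a URL containing quotes or shell metacharacters can't break the command:

import subprocess

url = 'https://www.someurl.com'
# Arguments are passed as a list, so no shell quoting is involved
subprocess.run(['wget', '-c', '--read-timeout=5', '--tries=0', url], check=True)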
import urllib2
import time

max_attempts = 80
attempts = 0
sleeptime = 10  # in seconds, no reason to continuously try if network is down

#while True:  # possibly dangerous
while attempts < max_attempts:
    time.sleep(sleeptime)
    try:
        response = urllib2.urlopen("http://example.com", timeout=5)
        content = response.read()
        f = open("local/index.html", 'w')
        f.write(content)
        f.close()
        break
    except urllib2.URLError as e:
        attempts += 1
        print type(e)
For Windows and Python 3.x, my two cents' contribution about renaming the file on download:
Install the wget module: pip install wget
Use wget:

import wget
wget.download('Url', 'C:\\PathToMyDownloadFolder\\NewFileName.extension')

Truly working command line example:

python -c "import wget; wget.download(""https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.2.tar.xz"", ""C:\\Users\\TestName.TestExtension"")"

Note: 'C:\\PathToMyDownloadFolder\\NewFileName.extension' is not mandatory. By default, the file is not renamed, and the download folder is your current working directory.
Here's the code adapted from the torchvision library:

import os
import urllib.error
import urllib.request

def download_url(url, root, filename=None):
    """Download a file from a url and place it in root.

    Args:
        url (str): URL to download file from
        root (str): Directory to place downloaded file in
        filename (str, optional): Name to save the file under. If None, use the basename of the URL
    """
    root = os.path.expanduser(root)
    if not filename:
        filename = os.path.basename(url)
    fpath = os.path.join(root, filename)
    os.makedirs(root, exist_ok=True)
    try:
        print('Downloading ' + url + ' to ' + fpath)
        urllib.request.urlretrieve(url, fpath)
    except (urllib.error.URLError, IOError) as e:
        if url[:5] == 'https':
            url = url.replace('https:', 'http:')
            print('Failed download. Trying https -> http instead.'
                  ' Downloading ' + url + ' to ' + fpath)
            urllib.request.urlretrieve(url, fpath)

If you are OK with taking a dependency on the torchvision library, then you can also simply do:

from torchvision.datasets.utils import download_url
download_url('http://something.com/file.zip', '~/my_folder')
Let me improve the example with threads, in case you want to download many files.

import math
import random
import threading

import requests
from clint.textui import progress

# You must define a proxy list
# I suggest https://free-proxy-list.net/
proxies = {
    0: {'http': 'http://34.208.47.183:80'},
    1: {'http': 'http://40.69.191.149:3128'},
    2: {'http': 'http://104.154.205.214:1080'},
    3: {'http': 'http://52.11.190.64:3128'}
}

# You must define the list of files you want to download
videos = [
    "https://i.stack.imgur.com/g2BHi.jpg",
    "https://i.stack.imgur.com/NURaP.jpg"
]

downloaderses = list()

def downloaders(video, selected_proxy):
    print("Downloading file named {} by proxy {}...".format(video, selected_proxy))
    r = requests.get(video, stream=True, proxies=selected_proxy)
    nombre_video = video.split("/")[3]  # file name taken from the URL path
    with open(nombre_video, 'wb') as f:
        total_length = int(r.headers.get('content-length'))
        for chunk in progress.bar(r.iter_content(chunk_size=1024), expected_size=(total_length / 1024) + 1):
            if chunk:
                f.write(chunk)
                f.flush()

for video in videos:
    selected_proxy = proxies[math.floor(random.random() * len(proxies))]
    t = threading.Thread(target=downloaders, args=(video, selected_proxy))
    downloaderses.append(t)

for _downloaders in downloaderses:
    _downloaders.start()
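One hedged addition, not in the original: the script starts the threads but never waits for them. Joining makes completion explicit before anything that depends on the downloaded files runs:

# Wait for every download thread to finish
for _downloaders in downloaderses:
    _downloaders.join()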
Easy as py:

class Downloder():
    def download_manager(self, url, destination='Files/DownloderApp/', try_number="10", time_out="60"):
        #threading.Thread(target=self._wget_dl, args=(url, destination, try_number, time_out)).start()
        if self._wget_dl(url, destination, try_number, time_out) == 0:
            return True
        else:
            return False

    def _wget_dl(self, url, destination, try_number, time_out):
        import subprocess
        command = ["wget", "-c", "-P", destination, "-t", try_number, "-T", time_out, url]
        try:
            download_state = subprocess.call(command)
        except Exception as e:
            print(e)
            download_state = 1  # treat an exception as a failed download
        # if download_state == 0 => successful download
        return download_state
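A hypothetical usage sketch (the URL is illustrative, not from the original answer):

# download_manager returns True when wget exits with status 0
ok = Downloder().download_manager('https://example.com/file.zip')
print(ok)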
TensorFlow makes life easier. The returned file path gives us the location of the downloaded file.

import tensorflow as tf

file_path = tf.keras.utils.get_file(origin='https://storage.googleapis.com/tf-datasets/titanic/train.csv',
                                    fname='train.csv',
                                    untar=False, extract=False)

Python Downloading Zip Files Damaged

So, I've been trying to make a simple downloader that downloads my zip file.
Code looks like this:
import urllib2
import os
import shutil

url = "https://dl.dropbox.com/u/29251693/CreeperCraft.zip"
file_name = url.split('/')[-1]
u = urllib2.urlopen(url)
f = open('c:\CreeperCraft.zip', 'w+')
meta = u.info()
file_size = int(meta.getheaders("Content-Length")[0])
print "Downloading: %s Bytes: %s" % (file_name, file_size)

file_size_dl = 0
block_sz = 8192
while True:
    buffer = u.read(block_sz)
    if not buffer:
        break
    file_size_dl += len(buffer)
    f.write(buffer)
    status = r"%10d [%3.2f%%]" % (file_size_dl, file_size_dl * 100. / file_size)
    status = status + chr(8)*(len(status)+1)
    print status,
f.close()
And the problem is, it downloads the file to the correct path, but when I open the file it's damaged: only one picture appears, and when you click on it, it says "File Damaged".
Please help.
f = open('c:\CreeperCraft.zip', 'wb+')
You are using "w+" as the mode, so Python opens the file in text mode:
Python on Windows makes a distinction between text and binary files;
the end-of-line characters in text files are automatically altered
slightly when data is read or written. This behind-the-scenes
modification to file data is fine for ASCII text files, but it’ll
corrupt binary data like that in JPEG or EXE files.
http://docs.python.org/tutorial/inputoutput.html#reading-and-writing-files
Also, note that you should escape the backslash or use raw strings, therefore use open('c:\\CreeperCraft.zip', 'wb+').
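A quick illustration of why (this particular path happens to survive unescaped, because '\C' is not a recognized escape sequence, but many paths don't):

path1 = 'c:\\CreeperCraft.zip'  # escaped backslash
path2 = r'c:\CreeperCraft.zip'  # raw string: backslash taken literally
print(path1 == path2)           # True
print('c:\test')                # '\t' silently becomes a TAB character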
I also would recommend that you do not copy raw byte strings by hand, but use shutil.copyfileobj - it makes your code more compact and easier to understand. I also like to use the with statement, which automatically cleans up resources (i.e. closes files):

import shutil
import urllib2
from contextlib import closing

url = "https://dl.dropbox.com/u/29251693/CreeperCraft.zip"
# urllib2 responses are not context managers in Python 2, so wrap in closing()
with closing(urllib2.urlopen(url)) as source, open('c:\\CreeperCraft.zip', 'w+b') as target:
    shutil.copyfileobj(source, target)
import posixpath
import sys
import urlparse
import urllib

url = "https://dl.dropbox.com/u/29251693/CreeperCraft.zip"
filename = posixpath.basename(urlparse.urlsplit(url).path)

def print_download_status(block_count, block_size, total_size):
    sys.stderr.write('\r%10s bytes of %s' % (block_count*block_size, total_size))

filename, headers = urllib.urlretrieve(url, filename, print_download_status)
