Download images from google drive [duplicate] - python

This question already has an answer here:
Downloading Images from Google Drive
(1 answer)
Closed 6 years ago.
I have multiple Google Drive image URLs in a text file, and I want to download each image from its URL. The catch is that I want to save each image under its original name.
Here is the reference
Can anyone help me with a solution?
Alternate Solution:
I have found one way to download the images.
Original URL:
https://drive.google.com/open?id=0BwJzkr_gZEA0d1h2dTN6MndvdkE
Convert it to:
https://drive.google.com/uc?export=download&id=0BwJzkr_gZEA0d1h2dTN6MndvdkE
After that, add the converted URL to IDM and you will be able to download the image under its original name.
Hope that helps.
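The same conversion can be scripted for every URL in the text file. This is a sketch, not a definitive implementation: the regex and the Content-Disposition parsing are my assumptions, and the original filename is only recoverable when the server actually sends that header.

```python
import re
import urllib.request

def to_direct_url(share_url):
    # Extract the file id from a drive.google.com/open?id=... link
    # and rebuild it as a direct-download link.
    m = re.search(r"id=([\w-]+)", share_url)
    if not m:
        raise ValueError("no file id in: " + share_url)
    return "https://drive.google.com/uc?export=download&id=" + m.group(1)

def download(share_url):
    # Fetch the file; the original name, if available, is reported
    # in the Content-Disposition response header.
    with urllib.request.urlopen(to_direct_url(share_url)) as resp:
        cd = resp.headers.get("Content-Disposition", "")
        m = re.search(r'filename="([^"]+)"', cd)
        name = m.group(1) if m else "download.bin"
        with open(name, "wb") as f:
            f.write(resp.read())
        return name
```

Looping over the lines of the text file and calling `download` on each would then cover the whole batch.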

Have you tried doing it through the Google Drive API?
See here to create a simple script and here for the endpoints.
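For reference, a minimal sketch of building an authorized request against the Drive v3 `files.get` endpoint with `alt=media` (which returns the file's raw bytes) using only the standard library. The `access_token` here is a placeholder: obtaining a real one via OAuth 2.0 is out of scope.

```python
import urllib.request

API = "https://www.googleapis.com/drive/v3/files/{file_id}?alt=media"

def media_request(file_id, access_token):
    # Build a GET request for the file's raw bytes, authorized with
    # a Bearer token (access_token is a placeholder, not a real token).
    req = urllib.request.Request(API.format(file_id=file_id))
    req.add_header("Authorization", "Bearer " + access_token)
    return req
```

Passing the returned request to `urllib.request.urlopen` would download the image content.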

Related

Download closed Google Sheets by link

I have a Google Sheet that only a few people have access to; link sharing is turned off, so I can't download the document using a regular export link:
import urllib.request

def download_xlsx():
    url = 'https://docs.google.com/spreadsheets/d/spreadsheetID/export?format=xlsx'
    urllib.request.urlretrieve(url, 'table.xlsx')
I realized the problem is that there is no link access to the document, and I cannot enable it, since the terms of reference forbid exactly that.
Has anyone already faced a similar task and can suggest a solution?
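Since the file isn't link-shared, one route is to authenticate as an account that does have access and call the Drive v3 export endpoint instead of the public export link. A sketch using only the standard library follows; acquiring the OAuth token is out of scope and the token here is a placeholder.

```python
import urllib.parse
import urllib.request

# MIME type for an .xlsx export of a Google Sheet
XLSX = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"

def export_request(spreadsheet_id, access_token):
    # Drive v3 export endpoint; requires an OAuth token for an
    # account that has been granted access to the file.
    url = ("https://www.googleapis.com/drive/v3/files/%s/export?%s"
           % (spreadsheet_id, urllib.parse.urlencode({"mimeType": XLSX})))
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Bearer " + access_token)
    return req
```

Opening the request with `urllib.request.urlopen` and writing the response body to `table.xlsx` would replace the `urlretrieve` call above.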

How to use Python to download an image from http://..../*.jpg? [duplicate]

This question already has an answer here:
How to use urllib to download image from web
(1 answer)
Closed 7 years ago.
I use the following code to download the image 14112758275517_800X533.jpg.
The problem is that I cannot open the copy saved as G:\image.jpg, because Windows Photo Viewer reports that the file may be corrupted, damaged, or too large.
import urllib.request

imageurl = "http://img.vogue.com.cn/userfiles/201409/14112758275517_800X533.jpg"
pic_name = "G:\\image.jpg"
urllib.request.urlretrieve(imageurl, pic_name)
How can I download the image so that it is readable?
I think you cannot. It is likely a server-side restriction, not a problem in the code you posted.
I get a 403 Forbidden when opening the URL you gave, even in a browser.
As an example, the following works, and only the URL has changed:
import urllib.request

imageurl = "https://www.python.org/static/img/python-logo.png"
pic_name = "./image.png"
urllib.request.urlretrieve(imageurl, pic_name)
However, you might want to check other topics about the subject, as there are more advanced techniques for downloading images from the web, such as https://stackoverflow.com/a/8389368/2549230
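One such technique is sending browser-like headers, since some hosts reject Python's default user agent. This is only a sketch, and it will not help when the server blocks the resource outright:

```python
import urllib.request

def image_request(url):
    # Some hosts 403 the default Python user agent; a browser-like
    # User-Agent sometimes (not always) gets past that check.
    return urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

def fetch_image(url, path):
    # Download the image bytes and write them in binary mode.
    with urllib.request.urlopen(image_request(url)) as resp, \
         open(path, "wb") as f:
        f.write(resp.read())
```

If the saved file still won't open, inspecting its first bytes usually reveals whether the server returned an HTML error page instead of image data.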

Google Search by Image Script for Local Images [duplicate]

This question already has answers here:
How to compose the URL for a reverse Google image search?
(3 answers)
Closed 3 years ago.
I'm looking for a script that finds images similar to my local images. I have searched for similar topics on Stack Overflow, but I could not find any solution or clue for my problem.
The question at the following URL is similar to mine, but it searches using text:
python search with image google images
I think I must pass my local image to the HTTP request as raw bytes, but I could not find out how to do that.
Finally, I tried uploading my local images to the web and searching by URL, but then I ran into the following problem:
When I searched this image with its url, google generates this url:
https://www.google.com/search?tbs=sbi:AMhZZivZoXHOHzWl5_1BGnG05Bm1LpdXCjewepYnpAH4Xi-s7fVU0S86XG4MFlP7hYlGUpioWaZSjwBBIRDOXrGL8uum9wurfEZowKDUl_1GMPE8JHOO5vEb_1iMSbkmvqx-sWxbPqeHeW1eeJPDgtjio_1l7sJcvSbIquQOoacs3x1mDiF7OLw0mNA3WdR59dFDZAwlpU9A2cXbk_1RrqcilNOEcf0osSDx6TDtXN9ndN3ZSFF8NQhHVDPRrjqRpETbXpVHtyJiIxTzLeAiSC-POpwwN1I3tutScJISO72ZhLCUMAZ-gAuuaTHiHQq-vJBcAgq_1zfzwrDxncCVaKBlqb-zDHclm_1tc9qAMlIIsuKvGXnOSY9flVL4Nqk6Js8Un7_1P_1MbkgVCOcWRmbKG0E_1Sl_145Xe-las_18k4e0N0Ar9eKWGd5gvO33ai967E1tj8uiBqfjZTDYUC_1UARgU-IedUIU4uTmpLgK2xMBTXbSgLU8LdW5ZmB1p_1Tm7tpyIczoN23B2AJz9tFp1wnVOeCi_1jOcegCMPxw_1pULXDVWmgd_1f1OMX_1OrLl7wq5VZbBnH3ME62tdKCScZySq7_11Rx7zvzf2JTKQ_16jt_1HJ2Nf6mYb77n58TSMOSbxNvlCnT6afbPHN_101-Xrb2o0QnkESNBMKNwhLg2ZDDgRSgO0gvyzn86FAIR4Eif77PMV0IlEXtaizdveGwCN3upch2XZQpzljgMOUD0ZEfpe_1GxysMuetPZe_12MsYFp2EVW_19oFqTiavEtn2LIcBI1jhow5zWCkwmcNv8Dz80qYTLCRcAaj5l5w2DsdJd8IiufYP0qxKb5pwXbdM0k3-jEQVaWBo_1wK4dohn3UierX63up9YZWNfKNciTjecJ2q69b9xkhtXp_1LWt9Sdi8-xt25FS1XkW6VdVuqhX9-OexZ9G8bV1SgOEHx5GOuCkdsBjqBZ_1Df9wDGLKDX4V9BVvpX_13TLn6YNFtkHR70z_1zaG66rHPun-fWygzsO_1uSmJH5BtcQODEOSJ7jCs_1iSJf--RB339DBzLenbJB_1HUVPiC7Tj0BvbnWtLnY9sElHi5jPprOlqfVa9uQe21eymwXZROi4aWwhByeODCsCfZjjUNoi0M_1pCTva4KW6mlmrWshh9h_1_1kl3Wx7sKpHGBqIY7VJ8pG3kcp7x0YtbPmfxF6J2iKoMzKHyutTx3cn5PJY9kZhOYs5RCs9ejC0Vmw42qdQaivEUB1aQazxRYH-knaGcbANS0p2OacI32X1SrwWoOdodj733y5_1jJi2soZi4COkUjG_18_1c028sLlBkdVkedcq8DXbUEcQB5jIQPx1115aZqdn8SzSLGxLhowIlVxq6kLuyXuLJy72kArT91Rol2v5jHFxapFjrNuDgwdirVQQIsbx_1jXzgTVPdhYV08eFdpnVnsVu3OaUNZPZO8gsSs9A
I expected a URL like google.com/search?url={image_url}, but that is not what Google produces, so I cannot build a script to search for my local images.
How can I solve this? Thanks for your help.
Use the following url
https://www.google.com/searchbyimage?&image_url=
and concatenate your image URL to it.
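When concatenating, the image URL should be percent-encoded so its own `://` and `/` characters don't break the query string. A small helper (the function name is mine, not from the answer):

```python
import urllib.parse

def reverse_search_url(image_url):
    # Percent-encode the whole image URL before appending it as the
    # image_url query parameter; safe="" encodes ':' and '/' too.
    return ("https://www.google.com/searchbyimage?image_url="
            + urllib.parse.quote(image_url, safe=""))
```

The resulting URL can then be opened in a browser or fetched by a script.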

Scraping data uri image [duplicate]

This question already has an answer here:
Downloading Image Data URIs from Webpages via BeautifulSoup
(1 answer)
Closed 7 years ago.
I would like to scrape images from a webpage. The problem is that the images are embedded in the source code as data URIs. How do I save them to files?
(I only need to handle images from specific scraped data-URI strings.)
The image string is base64-encoded (as stated in the URI itself!). All you have to do is decode it and write the bytes to a file:
import base64

imageContents = "/9j/4AAQSkZJRgABAQAAAQABAAD/2wCEAAkGBxQSEhUUE"  # truncated sample; a real payload is longer
with open("image.jpg", "wb") as myfile:  # binary mode, since these are image bytes
    myfile.write(base64.b64decode(imageContents))
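For scraped strings that still carry their data:image/...;base64, prefix, a small helper can split off the header before decoding. This is a sketch, and the function name is mine, not from the question:

```python
import base64

def save_data_uri(data_uri, path):
    # A data URI looks like "data:image/jpeg;base64,<payload>".
    # Split off the header, check the encoding, decode the payload.
    header, _, payload = data_uri.partition(",")
    if "base64" not in header:
        raise ValueError("not a base64 data URI")
    raw = base64.b64decode(payload)
    with open(path, "wb") as f:  # binary mode: these are image bytes
        f.write(raw)
    return raw
```

The media type in the header (e.g. image/jpeg vs image/png) can also be used to pick the right file extension.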

How to back up a whole webpage, including pictures, with Python? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How to download a file in python
I'm playing with Python for some crawling. I know urllib.urlopen("http://XXXX") can fetch the HTML of a target website. However, the image links in that page usually point back at the original server, so the images in a backed-up page become unavailable. I am wondering whether there is a way to also save the images locally, so the full content of the site can be read without an internet connection. It's like backing up the whole webpage, but I'm not sure how to do that in Python. Also, if it could strip out the advertisements, that would be even better. Thanks.
If you're looking to back up a single webpage, you're well on your way.
Since you mention crawling: if you want to back up an entire website, you'll need to do some real crawling, and Scrapy can help with that.
There are several ways of downloading files off the interwebs, just see these questions:
Python File Download
How to download a file in python
Automate file download from http using python
Hope this helps
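As a starting point before reaching for Scrapy, the standard library can already extract the image URLs that a single-page backup needs to fetch. The class and function names below are illustrative, not from any of the linked answers:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImageCollector(HTMLParser):
    # Collect the absolute URL of every <img src=...> in a page.
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                # Resolve relative src values against the page URL.
                self.images.append(urljoin(self.base_url, src))

def find_images(html, base_url):
    parser = ImageCollector(base_url)
    parser.feed(html)
    return parser.images
```

Each collected URL can then be downloaded (e.g. with urllib.request.urlretrieve) and the corresponding src attribute rewritten to the local path, which is essentially what "save page complete" in a browser does.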
