Google Search by Image Script for Local Images [duplicate] - python

This question already has answers here:
How to compose the URL for a reverse Google image search?
(3 answers)
Closed 3 years ago.
I'm looking for a script that finds images similar to my local images. I have searched for similar topics on Stack Overflow but could not find any solution or clue for my problem.
The topic at the following URL is similar to my problem, but it searches using text:
python search with image google images
I think I must pass my local images in my HTTP request as raw bytes, but I could not find out how to do that.
Finally, I tried uploading my local images to the web and searching by their URLs, but this time I faced the following problem:
When I searched this image with its url, google generates this url:
https://www.google.com/search?tbs=sbi:AMhZZivZoXHOHzWl5_1BGnG05Bm1LpdXCjewepYnpAH4Xi-s7fVU0S86XG4MFlP7hYlGUpioWaZSjwBBIRDOXrGL8uum9wurfEZowKDUl_1GMPE8JHOO5vEb_1iMSbkmvqx-sWxbPqeHeW1eeJPDgtjio_1l7sJcvSbIquQOoacs3x1mDiF7OLw0mNA3WdR59dFDZAwlpU9A2cXbk_1RrqcilNOEcf0osSDx6TDtXN9ndN3ZSFF8NQhHVDPRrjqRpETbXpVHtyJiIxTzLeAiSC-POpwwN1I3tutScJISO72ZhLCUMAZ-gAuuaTHiHQq-vJBcAgq_1zfzwrDxncCVaKBlqb-zDHclm_1tc9qAMlIIsuKvGXnOSY9flVL4Nqk6Js8Un7_1P_1MbkgVCOcWRmbKG0E_1Sl_145Xe-las_18k4e0N0Ar9eKWGd5gvO33ai967E1tj8uiBqfjZTDYUC_1UARgU-IedUIU4uTmpLgK2xMBTXbSgLU8LdW5ZmB1p_1Tm7tpyIczoN23B2AJz9tFp1wnVOeCi_1jOcegCMPxw_1pULXDVWmgd_1f1OMX_1OrLl7wq5VZbBnH3ME62tdKCScZySq7_11Rx7zvzf2JTKQ_16jt_1HJ2Nf6mYb77n58TSMOSbxNvlCnT6afbPHN_101-Xrb2o0QnkESNBMKNwhLg2ZDDgRSgO0gvyzn86FAIR4Eif77PMV0IlEXtaizdveGwCN3upch2XZQpzljgMOUD0ZEfpe_1GxysMuetPZe_12MsYFp2EVW_19oFqTiavEtn2LIcBI1jhow5zWCkwmcNv8Dz80qYTLCRcAaj5l5w2DsdJd8IiufYP0qxKb5pwXbdM0k3-jEQVaWBo_1wK4dohn3UierX63up9YZWNfKNciTjecJ2q69b9xkhtXp_1LWt9Sdi8-xt25FS1XkW6VdVuqhX9-OexZ9G8bV1SgOEHx5GOuCkdsBjqBZ_1Df9wDGLKDX4V9BVvpX_13TLn6YNFtkHR70z_1zaG66rHPun-fWygzsO_1uSmJH5BtcQODEOSJ7jCs_1iSJf--RB339DBzLenbJB_1HUVPiC7Tj0BvbnWtLnY9sElHi5jPprOlqfVa9uQe21eymwXZROi4aWwhByeODCsCfZjjUNoi0M_1pCTva4KW6mlmrWshh9h_1_1kl3Wx7sKpHGBqIY7VJ8pG3kcp7x0YtbPmfxF6J2iKoMzKHyutTx3cn5PJY9kZhOYs5RCs9ejC0Vmw42qdQaivEUB1aQazxRYH-knaGcbANS0p2OacI32X1SrwWoOdodj733y5_1jJi2soZi4COkUjG_18_1c028sLlBkdVkedcq8DXbUEcQB5jIQPx1115aZqdn8SzSLGxLhowIlVxq6kLuyXuLJy72kArT91Rol2v5jHFxapFjrNuDgwdirVQQIsbx_1jXzgTVPdhYV08eFdpnVnsVu3OaUNZPZO8gsSs9A
I expected a URL like google.com/search?url={image_url}, but that is not what it generates. Hence, I cannot build a script to search for my local images.
How can I solve my problem? Thanks for your help.

Use the following URL:
https://www.google.com/searchbyimage?&image_url=
and append your image's URL to it.
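In Python, that answer can be sketched like this; `image_url` is a hypothetical hosted copy of your local image, since `searchbyimage` takes a URL rather than raw bytes:

```python
import urllib.parse

# Hypothetical URL of your uploaded image -- searchbyimage needs the
# image to be reachable on the web, not a local file path.
image_url = "https://example.com/my-photo.jpg"

# Percent-encode the image URL so characters like ':' and '/' survive
# inside the query string.
search_url = ("https://www.google.com/searchbyimage?&image_url="
              + urllib.parse.quote(image_url, safe=""))
print(search_url)
```

Opening `search_url` in a browser (e.g. via the `webbrowser` module) then shows Google's similar-image results.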

Related

Finding Image for Google compute Instance

I am attempting to find the Image for an instance on Google Cloud Platform using the Google Python SDK. So far, I can get the instance, get the disks for the instance, find the boot disk, and extract the sourceImage and sourceImageId fields from there. However, since my instance and disk are in a different project from the image, I'm having trouble getting the actual image object.
The disk's sourceImage is https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20200420 and its sourceImageId is 2742482894347998968.
My question is similar to a previous question (GCP Python SDK - Get data from the googleapi url's) except that I would like to find the best-practice method. The lone answer on that question recommends using the URLs in the metadata to get the related object but there is a comment that indicates that the URLs in object metadata are not real URLs which makes me think they should not be used directly. All I need is the project (centos-cloud in this case) for the image and I can use the sourceImageId to get the image object. I could extract the project from the URL pretty easily but that seems fragile. I looked through the documentation to see if there was a built in function to extract the project or get the image using that URL, but I didn't find one.
Is there something I have overlooked? Or is this not a supported workflow? Or did I misinterpret John Hanley's comment and I should just use that URL to pull the image?
TL;DR: Given only a disk's sourceImage (e.g. https://www.googleapis.com/compute/v1/projects/centos-cloud/global/images/centos-7-v20200420) and sourceImageId (e.g. 2742482894347998968), how can I programmatically get the Image object from the API?
google-api-python-client==2.45.0
Python 3.10
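Absent a dedicated helper in google-api-python-client, one admittedly fragile sketch (the very URL-parsing approach the question worries about) is to pull the project and image name out of the sourceImage link, then fetch the image with the Compute API:

```python
from urllib.parse import urlparse

def parse_source_image(self_link):
    # The URL path looks like:
    #   /compute/v1/projects/<project>/global/images/<image>
    parts = urlparse(self_link).path.strip("/").split("/")
    project = parts[parts.index("projects") + 1]
    image = parts[parts.index("images") + 1]
    return project, image

project, image = parse_source_image(
    "https://www.googleapis.com/compute/v1/projects/centos-cloud"
    "/global/images/centos-7-v20200420"
)
# With googleapiclient (not run here), the image object would then be:
#   compute = googleapiclient.discovery.build("compute", "v1")
#   compute.images().get(project=project, image=image).execute()
```

This breaks if the selfLink format ever changes, which is exactly the fragility the question is asking to avoid.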

How do you download Google Image Search images with Python in 2022?

I'm working on a project which requires downloading the first n results from Google Images given a query term, and I'm wondering how to do this. It seems that they deprecated their API recently, and I haven't been able to find a good up-to-date answer. Ultimately, I want to
Enter query term
Save the URLs for the first n images in a txt file (example)
Download the images from those URLs
I have seen similar solutions that use Selenium but I was hoping to use Requests instead. I'm not very familiar with HTML parsing, but have used beautifulsoup before! Any help is greatly appreciated. I'm currently using Python 3.8.
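Leaving aside how to get the URLs out of Google (the part the API deprecation broke), steps 2 and 3 can be sketched with only the standard library; the file and directory names here are made up:

```python
import os
import urllib.request

def download_from_list(txt_path, out_dir="images"):
    # Assumes txt_path holds one image URL per line (step 2 above);
    # downloads each one into out_dir (step 3).
    os.makedirs(out_dir, exist_ok=True)
    with open(txt_path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for i, url in enumerate(urls):
        # Keep the file extension from the URL when there is one.
        ext = os.path.splitext(url.split("?")[0])[1] or ".jpg"
        with urllib.request.urlopen(url) as resp:
            with open(os.path.join(out_dir, f"{i}{ext}"), "wb") as out:
                out.write(resp.read())
```

This uses urllib rather than Requests, but the structure is the same either way.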

Download images from google drive [duplicate]

This question already has an answer here:
Downloading Images from Google Drive
(1 answer)
Closed 6 years ago.
I have multiple Google Drive image URLs in a text file, and I want to download each image from its URL. The catch is that I want to save each image under its original name.
Here is the reference.
Can anyone help me with a solution?
Alternate Solution:
I have found one solution for downloading the images.
Original URL:
https://drive.google.com/open?id=0BwJzkr_gZEA0d1h2dTN6MndvdkE
Convert it to:
https://drive.google.com/uc?export=download&id=0BwJzkr_gZEA0d1h2dTN6MndvdkE
After that, add this URL to IDM and you will be able to download the image with its original name.
Hope that will help.
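The conversion above can also be automated in Python; recovering the original filename relies on the Content-Disposition header that the direct link returns, which is an assumption about Drive's response:

```python
import re
import urllib.request

def to_direct_link(share_url):
    # Turn https://drive.google.com/open?id=<FILE_ID> into the
    # uc?export=download form described above.
    file_id = re.search(r"id=([\w-]+)", share_url).group(1)
    return "https://drive.google.com/uc?export=download&id=" + file_id

def download_with_original_name(share_url):
    # Not run here: fetches the file and reads its original name
    # from the Content-Disposition header, falling back to a stub.
    with urllib.request.urlopen(to_direct_link(share_url)) as resp:
        header = resp.headers.get("Content-Disposition", "")
        match = re.search(r'filename="([^"]+)"', header)
        name = match.group(1) if match else "download.bin"
        with open(name, "wb") as f:
            f.write(resp.read())
    return name
```

Looping `download_with_original_name` over each line of the text file covers the whole task.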
Have you tried doing it through the Google Drive API?
See here to create a simple script and here for the endpoints.

how to use python download available image from http://..../*.jpg? [duplicate]

This question already has an answer here:
How to use urllib to download image from web
(1 answer)
Closed 7 years ago.
I use the following code to download the image 14112758275517_800X533.jpg.
The problem is that I cannot open the file saved as G:\\image.jpg, because
"Windows Photo Viewer was unable to open this picture, as the file may be corrupted, damaged or too large."
import urllib
imageurl="http://img.vogue.com.cn/userfiles/201409/14112758275517_800X533.jpg"
pic_name = "G:\\image.jpg"
urllib.urlretrieve(imageurl, pic_name)
How can I download the image so that it is readable?
I think you cannot. It is likely a problem with the website, not with the code you posted.
I get a 403 Forbidden when trying to open the URL you gave, even in a browser.
As an example, the following works, and only the URL has changed:
import urllib
imageurl="https://www.python.org/static/img/python-logo.png"
pic_name = "./image.png"
urllib.urlretrieve(imageurl, pic_name)
However, you might want to check other topics on the subject, as there are more advanced techniques for downloading images from the web, such as https://stackoverflow.com/a/8389368/2549230
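For what it's worth, on Python 3 the same download looks like this, and sending a browser-like User-Agent header (an assumption about what the server checks) is often enough to get past a 403:

```python
import urllib.request

imageurl = "https://www.python.org/static/img/python-logo.png"
pic_name = "./image.png"

# Some servers reject the default Python user agent with 403;
# a browser-like User-Agent header often avoids that.
req = urllib.request.Request(imageurl,
                             headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp, open(pic_name, "wb") as out:
    out.write(resp.read())
```

Whether the header is actually needed depends on the server; python.org serves the file either way.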

How to back up whole webpage include picture with python? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How to download a file in python
I'm playing with Python to do some crawling. I know that urllib.urlopen("http://XXXX") can fetch the HTML of a target website. However, the links to the original images in that page usually leave the images in the backed-up page unavailable. I am wondering whether there is a way to also save the images locally, so the full content of the site can be read without an internet connection. It's like backing up the whole webpage, but I'm not sure whether there is a way to do that in Python. Also, if it could strip out the advertisements, that would be even more awesome. Thanks.
If you're looking to back up a single webpage, you're well on your way.
Since you mention crawling: if you want to back up an entire website, you'll need to do some real crawling, and you'll want Scrapy for that.
There are several ways of downloading files off the interwebs, just see these questions:
Python File Download
How to download a file in python
Automate file download from http using python
Hope this helps
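For the single-page case, a minimal standard-library sketch (no Scrapy, no ad stripping) looks like this; it saves the raw HTML plus every image it can fetch, but it does not rewrite the img src attributes to point at the local copies, which a full offline backup would also need:

```python
import os
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImageCollector(HTMLParser):
    """Collect the src attribute of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def backup_page(url, out_dir="backup"):
    # Save the page HTML, then each image it references.
    os.makedirs(out_dir, exist_ok=True)
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    with open(os.path.join(out_dir, "page.html"), "w", encoding="utf-8") as f:
        f.write(html)
    collector = ImageCollector()
    collector.feed(html)
    for src in collector.srcs:
        img_url = urljoin(url, src)  # resolve relative links
        name = os.path.basename(src.split("?")[0]) or "image"
        try:
            data = urllib.request.urlopen(img_url).read()
            with open(os.path.join(out_dir, name), "wb") as f:
                f.write(data)
        except OSError:
            pass  # skip images that fail to download
```

html.parser keeps it dependency-free; BeautifulSoup would make the img-tag extraction shorter.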
