Scraping product URLs under specific ZIP Code - python

I'm trying to scrape the product links under ZIP code 08041. I have written the code to scrape the products without a ZIP code, but I don't know how to build and send the request for the products under 08041.
Here is my code:
import requests
import random
import time
from bs4 import BeautifulSoup
import wget
import csv
from fp.fp import FreeProxy

def helloworld(url):
    r = requests.get(url)
    print('Status', r.status_code)
    #time.sleep(8)
    soup = BeautifulSoup(r.content, 'html.parser')
    post = soup.find_all('a', "name")
    for href in post:
        if href.get('href')[1] == 'p':
            href = href.get('href')
            print(href)

def page_counter():
    url1 = "https://soysuper.com/c/aperitivos#products"
    print(url1, '\n')
    helloworld(url1)

page_counter()

You can use the back-end endpoints to mimic a request with a given ZIP code.
Note: The cookie is hard-coded but valid for a year.
Here's how:
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.105 Safari/537.36",
    "X-Requested-With": "XMLHttpRequest",
    "Cookie": "soysuper=eyJjYXJ0IjoiNjA2NWNkMzg5ZDI5YzkwNDU1NjI3MzYzIiwiZXhwaXJlcyI6MTY0ODg0MTMzOSwib3JpZCI6IkM2NzgwOUYyLTkyRUYtMTFFQi04NjNELTgzMTBCMUUwMTM2NiIsInNtIjoiIiwidXVpZCI6IkIwQjYxQzRFLTkyRUYtMTFFQi05MjRCLTA5MTFCMUUwMTM2NiIsIndoIjpbIjU0MDQ5MjEwMDk1Y2ZhNTQ2YzAwMDAwMCIsIjRmZjMwZTZhNTgzMmU0OGIwMjAwMDAwMCIsIjU5Y2JhZmE2OWRkNGU0M2JmMzIwODM0MiIsIjRmMzEyNzU4ZTNjNmIzMDAzMjAwMDAwMCIsIjVhMTZmNjdhMjUwOGMxNGFiMzE0OTY4MyIsIjYwMjQxNTEzNzIyZDZhNTZkNDZlMjhmNyIsIjRmZjMwZTJkYzI3ZTk1NTkwMjAwMDAwMSIsIjU5ZjcxYTZlNjI4YWIwN2UyYjJjZmJhMSIsIjU5Y2JhZjNjOWRkNGU0M2JmMzIwODM0MSIsIjVhMGU0NDFhNTNjOTdiM2UxNDYyOGEzNiIsIjRmMmJiZmI3ZWJjYjU1OGM3YjAwMDAwMCIsIjYwNDExZjJlNzIyZDZhMTEyZDVjYTNlYiIsIjViMWZmZjAyNzI1YTYxNzBjOTIxMjc0MSIsIjVlNzk2NWUwZDc5MTg3MGU0NTA1MGMwMCIsIjVkMTI0NDQ2OWRkNGU0NGFkMDU3MmMxMSJdLCJ6aXAiOiIwODA0MSJ9--166849121eece159a6fdb0c0fe8341032321d9b1;"
}

with requests.Session() as connection:
    r = connection.get("https://soysuper.com/supermarket?zipcode=08041", headers=headers)
    headers["Request-Id"] = r.headers["Next-Request-Id"]
    headers["Referer"] = "https://soysuper.com/c/aperitivos"
    products_data = connection.get("https://soysuper.com/c/aperitivos?products=1&page=1", headers=headers).json()
    print(products_data["products"]["total"])
Output: Total number of products for 08041 zip code.
2923
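As a side note (an observation about this particular site, not a documented API): the soysuper cookie is a signed, base64-encoded JSON blob whose payload embeds the ZIP code, so you can check what the hard-coded cookie carries. Reusing the headers dict from above:

import base64
import json

# The cookie value has the form "<base64 payload>--<signature>".
payload = headers["Cookie"].split("=", 1)[1].split("--")[0]
payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
print(json.loads(base64.b64decode(payload))["zip"])  # -> 08041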
What you're effectively getting is a JSON response with all the product data for a given page; it is the same response you can inspect in the browser's Network tab.
Do notice the pager key. Use it to "paginate" the API and get more product info.
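For example, here is a minimal pagination sketch that reuses the connection and headers from above and simply increments the page parameter until a page comes back empty (the "items" key is an assumption; check one response, or the pager object itself, for the real field names):

page = 1
all_products = []
while True:
    data = connection.get(
        f"https://soysuper.com/c/aperitivos?products=1&page={page}",
        headers=headers,
    ).json()
    items = data["products"].get("items", [])  # "items" is an assumed key
    if not items:
        break
    all_products.extend(items)
    page += 1
print(len(all_products))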

Related

webscraping python not showing all tags

I'm new to web scraping. I was trying to make a script that gets data from a balance sheet (here is the site: https://www.sec.gov/ix?doc=/Archives/edgar/data/320193/000032019320000010/a10-qq1202012282019.htm). The problem is getting the data: when I look at the source code in my browser, I'm able to find the tag and the correct value, but once I write a script with bs4, I don't get anything.
I'm trying to get information from the balance sheet: Products, Services, Cost of sales... and the data contained in table 1 (I can't post the image, but it's the first table you see when scrolling down).
Here's my code.
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

url = "https://www.sec.gov/ix?doc=/Archives/edgar/data/320193/000032019320000010/a10-qq1202012282019.htm"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
read_data = urlopen(req).read()
soup_data = BeautifulSoup(read_data, "lxml")
names = soup_data.find_all("td")
for name in names:
    print(name)
Thanks for your time.
Try the direct Archives URL instead: the /ix?doc=... address loads SEC's inline XBRL viewer, which fills the page in with JavaScript, while the plain /Archives/... address serves the raw HTML. Also include the headers to get the data.
import requests
from bs4 import BeautifulSoup
url = "https://www.sec.gov/Archives/edgar/data/320193/000032019320000010/a10-qq1202012282019.htm"
headers = {"User-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"}
req = requests.get(url, headers=headers)
soup_data = BeautifulSoup(req.text,"lxml")
You will be able to find the data you need.
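From there, a small sketch of pulling out the table cells (it assumes, as the question suggests, that the statement data sits in the first <table> of the filing; adjust the index after inspecting the page):

# Print the non-empty rows of the first table as lists of cell strings.
table = soup_data.find("table")
for row in table.find_all("tr"):
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if any(cells):
        print(cells)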

site data not populated as in browser, despite rendering with requests-html

I am experimenting with requests-html on various sites, and I am having trouble extracting the price of a stock on this particular site:
https://www.morningstar.com/stocks/xnys/BABA/quote
I am using requests-html and calling html.render() to render the JavaScript. Despite this, the data doesn't seem to be populated as it is within the browser.
from requests_html import HTMLSession
import requests_html
from requests.exceptions import HTTPError

user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36'
requests_html.DEFAULT_USER_AGENT = user_agent

def get_request(url):  # was: get_request(ticker), which never used its argument
    session = HTMLSession()
    print(url)
    res = session.get(url)
    try:
        res.raise_for_status()
    except HTTPError as e:  # raise_for_status() raises HTTPError, not ValueError
        raise ValueError('Dead link') from e
    return res

def mstar():
    url = 'https://www.morningstar.com/stocks/xnys/BABA/quote'
    res = get_request(url)  # was: get_requesturl), a typo
    res.html.render()
    price = res.html.find('div#message-box-price.message-partial.fill.up')[0].text
    print(price)
    price = res.html.find('div.message-partial.fill.up')[0].text
    print(price)
    change = res.html.find('div#message-box-percentage')[0].text
    print(change)
The expected outcome is this data:
262.20
4.26 | 1.65%
However, I am just getting back the symbols - or %, but no actual prices.
Any suggestions? Thank you.
The data is generated by a JSON API and then dynamically inserted into the website via JavaScript, hence Python requests cannot see it. You can verify it yourself by running curl https://www.morningstar.com/stocks/xnys/baba/quote and trying to find the 1.65% in the output -- it is not there, simply because it is not in the HTML source.
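The same check in Python, for convenience (a plain GET with no JavaScript execution; the generic User-Agent is just to avoid a trivial block):

import requests

html = requests.get(
    "https://www.morningstar.com/stocks/xnys/baba/quote",
    headers={"User-Agent": "Mozilla/5.0"},
).text
print("1.65%" in html)  # False: the value is injected later by JavaScript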
I would suggest using selenium instead, and parsing the data along these lines:
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.morningstar.com/stocks/xnys/baba/quote")
for element in driver.find_elements(By.CSS_SELECTOR, "div.message-partial.fill.up"):
    print(element.text)

Unable to parse a rating information from a webpage using requests

I tried to scrape a certain piece of information from a webpage but failed miserably. The text I wish to grab is available in the page source, but I still can't fetch it. The site address is in the script below. I'm after the portion shown as Not Rated.
Relevant html:
<div class="subtext">
Not Rated
<span class="ghost">|</span> <time datetime="PT188M">
3h 8min
</time>
<span class="ghost">|</span>
Drama,
Musical,
Romance
<span class="ghost">|</span>
<a href="/title/tt0150992/releaseinfo?ref_=tt_ov_inf" title="See more release dates">18 June 1999 (India)
</a> </div>
I've tried with:
import requests
from bs4 import BeautifulSoup

link = "https://www.imdb.com/title/tt0150992/?ref_=ttfc_fc_tt"

with requests.Session() as s:
    s.headers['User-Agent'] = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"
    r = s.get(link)
    soup = BeautifulSoup(r.text, "lxml")
    rating = soup.select_one(".titleBar .subtext").next_element
    print(rating)
I get None using the script above.
Expected output:
Not Rated
How can I get the rating from that webpage?
If you want to get the correct version of the HTML page, specify the Accept-Language HTTP header:
import requests
from bs4 import BeautifulSoup

link = "https://www.imdb.com/title/tt0150992/?ref_=ttfc_fc_tt"

with requests.Session() as s:
    s.headers['User-Agent'] = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"
    s.headers['Accept-Language'] = 'en-US,en;q=0.5'  # <-- specify also this!
    r = s.get(link)
    soup = BeautifulSoup(r.text, "lxml")
    rating = soup.select_one(".titleBar .subtext").next_element
    print(rating)
Prints:
Not Rated
There is a better way to get info from the page. If you dump the HTML content returned by the request:
import requests
from bs4 import BeautifulSoup

link = "https://www.imdb.com/title/tt0150992/?ref_=ttfc_fc_tt"

with requests.Session() as s:
    s.headers['User-Agent'] = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"
    r = s.get(link)
    soup = BeautifulSoup(r.text, "lxml")
    with open("response.html", "w", encoding=r.encoding) as file:
        file.write(r.text)
you will find an element <script type="application/ld+json"> which contains all the information about the movie.
Then you simply get the element's text, parse it as JSON, and use the JSON to extract the info you want.
Here is a working example:

import json
import requests
from bs4 import BeautifulSoup

link = "https://www.imdb.com/title/tt0150992/?ref_=ttfc_fc_tt"

with requests.Session() as s:
    s.headers['User-Agent'] = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36"
    r = s.get(link)
    soup = BeautifulSoup(r.text, "lxml")
    # Find the <script type="application/ld+json"> element and get its content
    movie_data = soup.find("script", attrs={"type": "application/ld+json"}).next
    movie_data = json.loads(movie_data)  # parse the data as JSON
    content_rating = movie_data["contentRating"]  # get the rating
IMDB is one of those webpages that makes it incredibly easy to do web scraping, and I love it. What they do to make it easy for web scrapers is put a script at the top of the HTML that contains the whole movie object in JSON format.
So to get all the relevant information in an organized form, you simply need to get the content of that single script tag and convert it to JSON; then you can look up specific fields the way you would with a dictionary.
import requests
import json
from bs4 import BeautifulSoup

# This part is basically the same as yours
link = "https://www.imdb.com/title/tt0150992/?ref_=ttfc_fc_tt"
r = requests.get(link)
soup = BeautifulSoup(r.content, "lxml")

# Why not get the whole JSON element of the movie?
script = soup.find('script', {"type": "application/ld+json"})
element = json.loads(script.text)

print(element['contentRating'])  # Outputs "Not Rated"

# You can also inspect the rest of the JSON; it has all the relevant information inside.
# Just -> print(json.dumps(element, indent=2))
Note:
Headers and session are not necessary in this example.
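If you need more than the rating, the same element object usually exposes other schema.org Movie fields; which keys are present can vary by title, so .get() is safer here:

# A few commonly present fields (presence not guaranteed for every title):
print(element.get('name'))           # movie title
print(element.get('datePublished'))  # release date
print(element.get('genre'))          # genre or list of genres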

How can I parse long web pages with beautiful soup?

I have been using the following code to parse the web page at https://www.blogforacure.com/members.php. The code is expected to return the links of all the members on the given page.
from bs4 import BeautifulSoup
import urllib

r = urllib.urlopen('https://www.blogforacure.com/members.php').read()
soup = BeautifulSoup(r, 'lxml')
headers = soup.find_all('h3')
print(len(headers))
for header in headers:
    a = header.find('a')
    print(a.attrs['href'])
But I get only the first 10 links from the above page. Even while printing the prettify option I see only the first 10 links.
The results are dynamically loaded by making AJAX requests to the https://www.blogforacure.com/site/ajax/scrollergetentries.php endpoint.
Simulate them in your code with requests maintaining a web-scraping session:
from bs4 import BeautifulSoup
import requests

url = "https://www.blogforacure.com/site/ajax/scrollergetentries.php"

with requests.Session() as session:
    session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'}
    session.get("https://www.blogforacure.com/members.php")

    page = 0
    members = []
    while True:
        # get page
        response = session.post(url, data={
            "p": str(page),
            "id": "#scrollbox1"
        })
        html = response.json()['html']

        # parse html
        soup = BeautifulSoup(html, "html.parser")
        page_members = [member.get_text() for member in soup.select(".memberentry h3 a")]
        print(page, page_members)

        members.extend(page_members)
        page += 1
It prints the current page number and the list of members per page, accumulating member names into a members list. I'm not posting the actual output since it contains real names.
Note that I've intentionally left the loop endless; please work out the exit condition yourself. It may be the point at which response.json() starts throwing an error; a sketch of one option follows.
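A minimal sketch of one possible exit condition, assuming the endpoint returns a non-JSON body (or an empty 'html' value) once the pages run out; both assumptions should be verified against a live response:

# Inside the while loop, replace the json-reading line with:
try:
    html = response.json()['html']
except ValueError:    # body was not JSON: assume no more pages
    break
if not html.strip():  # an empty page also ends the crawl (assumed)
    break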

Can't extract the text and find all by BeautifulSoup

I want to extract all the available items in the équipements (amenities) section, but I can only get the first four items, and then I get '+ Plus'.
import urllib2
from bs4 import BeautifulSoup
import re
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
url = 'https://www.airbnb.fr/rooms/8261637?s=bAMrFL5A'

req = urllib2.Request(url=url, headers=headers)
html = urllib2.urlopen(req)
bsobj = BeautifulSoup(html.read(), 'lxml')
b = bsobj.findAll("div", {"class": "row amenities"})
The result b does not contain the full list inside the tag; the last item is '+ Plus', which looks like the following:
<span data-reactid=".mjeft4n4sg.0.0.0.0.1.8.1.0.0.$1.1.0.0">+ Plus</span></strong></a></div></div></div></div></div>
This is because the data is filled in by React after the page loads, so if you download the page via requests you can't see the data.
Instead you have to use the Selenium web driver: open the page and let it process all the JavaScript. Then you can get access to all the data you expect; a sketch follows.
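A minimal Selenium sketch of that approach; the '+ Plus' link text and the div.row.amenities selector are taken from the question and may have changed since, and the listing itself may no longer exist:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.airbnb.fr/rooms/8261637?s=bAMrFL5A')
driver.implicitly_wait(10)  # wait up to 10 s when locating elements

# Expand the truncated amenities list by clicking "+ Plus", if present.
for link in driver.find_elements(By.TAG_NAME, 'a'):
    if '+ plus' in link.text.lower():
        link.click()
        break

# The full list should now be in the DOM.
print(driver.find_element(By.CSS_SELECTOR, 'div.row.amenities').text)
driver.quit()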
