soup.find returning None only sometimes? - python

I am scraping an Amazon product page and using Beautiful Soup to find the product name and price. For some reason, the title variable sometimes returns a value, and other times I get the error: "'NoneType' object has no attribute 'get_text'"
import requests
from bs4 import BeautifulSoup
URL = 'https://www.amazon.com/Lenovo-ThinkPad-i5-10210U-i7-7500U-Wireless/\
dp/B08BYZD4H9/ref=sr_1_2_sspa?dchild=1&keywords=thinkpad&qid=1595377662&sr=8\
-2-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyMVhTU1BOODg5TlgmZW5jcnlwdGVkS\
WQ9QTAzMTc5MDFMNjhGMUE0VlRHT1gmZW5jcnlwdGVkQWRJZD1BMDY3MDc3MzJPQzc2QkI5UlcwSUE\
md2lkZ2V0TmFtZT1zcF9hdGYmYWN0aW9uPWNsaWNrUmVkaXJlY3QmZG9Ob3RMb2dDbGljaz10cnVl'
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
title = soup.find(id="productTitle").get_text()
price = soup.find(id="priceblock_ourprice").get_text()
converted_price = int(price[1:6].replace(',',''))
print(converted_price)
print(title)

Try specifying more HTTP headers, for example User-Agent and Accept-Language. Also, change the parser to lxml or html5lib.
import requests
from bs4 import BeautifulSoup
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0',
    'Accept-Language': 'en-US,en;q=0.5'
}
URL = 'https://www.amazon.com/Lenovo-ThinkPad-i5-10210U-i7-7500U-Wireless/dp/B08BYZD4H9/ref=sr_1_2_sspa?dchild=1&keywords=thinkpad&qid=1595377662&sr=8-2-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUEyMVhTU1BOODg5TlgmZW5jcnlwdGVkSWQ9QTAzMTc5MDFMNjhGMUE0VlRHT1gmZW5jcnlwdGVkQWRJZD1BMDY3MDc3MzJPQzc2QkI5UlcwSUEmd2lkZ2V0TmFtZT1zcF9hdGYmYWN0aW9uPWNsaWNrUmVkaXJlY3QmZG9Ob3RMb2dDbGljaz10cnVl'
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'lxml') # <-- change to `lxml` or `html5lib`
title = soup.find(id="productTitle").get_text(strip=True)
price = soup.find(id="priceblock_ourprice").get_text(strip=True)
converted_price = int(price[1:6].replace(',',''))
print(converted_price)
print(title)
In my testing, this always prints:
1049
2020 Lenovo ThinkPad E15 15.6 Inch FHD 1080P Laptop| Intel 4-Core i5-10210U (Beats i7-7500U)| 16GB RAM| 1TB SSD (Boot) + 500GB HDD| FP Reader| Win10 Pro+ NexiGo Wireless Mouse Bundle
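As a side note, the slice-based conversion int(price[1:6].replace(',','')) silently breaks for prices of a different length or currency format. A regex-based sketch is more forgiving (the parse_price helper below is my own illustration, not part of the answer):

```python
import re

def parse_price(price_text):
    """Pull the first number (with optional thousands separators) out of a price string."""
    match = re.search(r'[\d,]+(?:\.\d+)?', price_text)
    if match is None:
        return None  # no digits found at all
    return float(match.group(0).replace(',', ''))

print(parse_price('$1,049.00'))  # 1049.0
```

This works regardless of the currency symbol or the number of digits, and returns None instead of raising when the price element held unexpected text.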

You are getting the error 'NoneType' object has no attribute 'get_text' because the page content changes between requests, and sometimes there is no element with id="productTitle" or no element with id="priceblock_ourprice".
Add some debug statements like the ones below and you will see exactly why the error occurs.
soup = BeautifulSoup(page.content, 'html.parser')
print(soup)
title_soup = soup.find(id="productTitle")
print(title_soup) # <- this might print None
print(title_soup.get_text())
price_soup = soup.find(id="priceblock_ourprice")
print(price_soup) # <- this might print None
print(price_soup.get_text())
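Beyond printing, it is often cleaner to guard against the None case once, in a small helper (safe_text is my own name for illustration, not a BeautifulSoup API):

```python
from bs4 import BeautifulSoup

def safe_text(soup, element_id, default=None):
    """Return the stripped text of the element with the given id, or default if it is missing."""
    tag = soup.find(id=element_id)
    return tag.get_text(strip=True) if tag is not None else default

soup = BeautifulSoup('<span id="productTitle"> Example Product </span>', 'html.parser')
print(safe_text(soup, 'productTitle'))                            # Example Product
print(safe_text(soup, 'priceblock_ourprice', default='missing'))  # missing
```

That way a missing element yields a sentinel value you can check for, instead of an AttributeError halfway through the scrape.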

Related

Extracting company name and other information inside all urls present in a webpage using beautifulsoup

<li>
<strong>Company Name</strong>
":"
<span itemprop="name">PT ERA MURNI BUSANA</span>
</li>
In the above HTML code, I am trying to extract the company name which is PT ERA MURNI BUSANA.
If I use a single test link, I can get the name using the single line of code I wrote:
soup.find_all("span",attrs={"itemprop":"name"})[3].get_text()
But I want to extract the information from all such pages present in a single web page.
So I wrote a for loop, but it does not fetch the details correctly. I am pasting the part of the code that I have been trying, which needs some modification.
Code:-
for link in supplierlinks: # links have been extracted and merged with the base url
    r = requests.get(link, headers=headers)
    soup = BeautifulSoup(r.content, 'lxml')
    companyname = soup.find_all("span", attrs={"itemprop": "name"})[2].get_text()
Output looks like:
{'Company Name': 'AIRINDO SAKTI GARMENT PT'}
{'Company Name': 'Garments'}
{'Company Name': 'Garments'}
Instead of "Garments" popping up in the output, I need the company name. How do I modify the code within the for loop?
Link: https://idn.bizdirlib.com/node/5290
Try this code:
import requests
from bs4 import BeautifulSoup
headers = {'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:32.0) Gecko/20100101 Firefox/32.0'}
r = requests.get('https://idn.bizdirlib.com/node/5290',headers=headers).text
soup = BeautifulSoup(r,'html5lib')
print(soup.find_all("span",attrs={"itemprop":"name"})[-1].get_text())
div = soup.find('div',class_ = "content clearfix")
li_tags = div.div.find_all('fieldset')[1].find_all('div')[-1].ul.find_all('li')
supplierlinks = []
for li in li_tags:
    try:
        supplierlinks.append("https://idn.bizdirlib.com/" + li.a['href'])
    except:
        pass
for link in supplierlinks:
    r = requests.get(link, headers=headers).text
    soup = BeautifulSoup(r, 'html5lib')
    print(soup.find_all("span", attrs={"itemprop": "name"})[-1].get_text())
Output:
PT ERA MURNI BUSANA
PT ELKA SURYA ABADI
PT EMPANG BESAR MAKMUR
PT EMS
PT ENERON
PT ENPE JAYA
PT ERIDANI TOUR AND TRAVEL
PT EURO ASIA TRADE & INDUSTRY
PT EUROKARS CHRISDECO UTAMA
PT EVERAGE VALVES METAL
PT EVICO
This code prints the company names of all the links on the page.
You can select the sibling element of the <strong> element that contains the text "Company Name" (also, don't forget to set the User-Agent HTTP header):
import requests
from bs4 import BeautifulSoup
url = 'https://idn.bizdirlib.com/node/5290'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0'}
soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser')
print( soup.select_one('strong:contains("Company Name") + *').text )
Prints:
PT ERA MURNI BUSANA
EDIT: To get contact person:
import requests
from bs4 import BeautifulSoup
url = 'https://idn.bizdirlib.com/node/5290'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:81.0) Gecko/20100101 Firefox/81.0'}
soup = BeautifulSoup(requests.get(url, headers=headers).content, 'html.parser')
print( soup.select_one('strong:contains("Company Name") + *').text )
print( soup.select_one('strong:contains("Contact") + *').text )
Prints:
PT ERA MURNI BUSANA
Mr. Yohan Kustanto
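The same sibling idea generalizes: every <strong> label and the element that follows it can be collected into one dict in a single pass. A sketch against a minimal stand-in for the HTML shown in the question (the fragment below is illustrative, not fetched from the site):

```python
from bs4 import BeautifulSoup

# Minimal stand-in for the <li><strong>label</strong>":"<span>value</span></li> structure
html = '''
<li><strong>Company Name</strong>":"<span itemprop="name">PT ERA MURNI BUSANA</span></li>
<li><strong>Contact</strong>":"<span>Mr. Yohan Kustanto</span></li>
'''
soup = BeautifulSoup(html, 'html.parser')

# Map each <strong> label to the text of its next sibling *tag*
# (find_next_sibling() only matches tags, so it skips the bare ":" text node)
info = {
    strong.get_text(strip=True): strong.find_next_sibling().get_text(strip=True)
    for strong in soup.find_all('strong')
}
print(info)
```

This gives you all the labeled fields at once instead of writing one selector per field.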

Why can't I scrape Amazon products with BeautifulSoup?

I am trying to scrape the heading of this Amazon listing. The code I wrote is working for some other Amazon listings, but not working for the url mentioned in the code below.
Here is the Python code I've tried:
import requests
from bs4 import BeautifulSoup
url="https://www.amazon.in/BULLMER-Cotton-Printed-T-shirt-Multicolour/dp/B0892SZX7F/ref=sr_1_4?c=ts&dchild=1&keywords=Men%27s+T-Shirts&pf_rd_i=1968024031&pf_rd_m=A1VBAL9TL5WCBF&pf_rd_p=8b97601b-3643-402d-866f-95cc6c9f08d4&pf_rd_r=EPY70Y57HP1220DK033Y&pf_rd_s=merchandised-search-6&qid=1596817115&refinements=p_72%3A1318477031&s=apparel&sr=1-4&ts_id=1968123031"
headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"}
page = requests.get(url, headers=headers)
print(page.status_code)
soup = BeautifulSoup(page.content, "html.parser")
#print(soup.prettify())
title = soup.find(id = "productTitle")
if title:
    title = title.get_text()
else:
    title = "default_title"
print(title)
Output:
200
default_title
HTML code from the inspector tools:
<span id="productTitle" class="a-size-large product-title-word-break">
BULLMER Mens Halfsleeve Round Neck Printed Cotton Tshirt - Combo Tshirt - Pack of 3
</span>
First, as others have commented, use a proxy service. Second, to reach an Amazon product page, an ASIN is enough.
Amazon follows this URL pattern for all product pages:
https://www.amazon.(com/in/fr)/dp/<asin>
import requests
from bs4 import BeautifulSoup
url="https://www.amazon.in/dp/B0892SZX7F"
headers = {'User-Agent' : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36'}
page = requests.get(url, headers=headers)
print(page.status_code)
soup = BeautifulSoup(page.content, "html.parser")
title = soup.find("span", {"id":"productTitle"})
if title:
    title = title.get_text(strip=True)
else:
    title = "default_title"
print(title)
Output:
200
BULLMER Mens Halfsleeve Round Neck Printed Cotton Tshirt - Combo Tshirt - Pack of 3
This worked fine for me:
import requests
from bs4 import BeautifulSoup
url="https://www.amazon.in/BULLMER-Cotton-Printed-T-shirt-Multicolour/dp/B0892SZX7F/ref=sr_1_4?c=ts&dchild=1&keywords=Men%27s+T-Shirts&pf_rd_i=1968024031&pf_rd_m=A1VBAL9TL5WCBF&pf_rd_p=8b97601b-3643-402d-866f-95cc6c9f08d4&pf_rd_r=EPY70Y57HP1220DK033Y&pf_rd_s=merchandised-search-6&qid=1596817115&refinements=p_72%3A1318477031&s=apparel&sr=1-4&ts_id=1968123031"
headers = {"User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"}
http_proxy = "http://10.10.1.10:3128"
https_proxy = "https://10.10.1.11:1080"
ftp_proxy = "ftp://10.10.1.10:3128"
proxyDict = {
    "http": http_proxy,
    "https": https_proxy,
    "ftp": ftp_proxy
}
# note: proxyDict is defined but never passed to requests.get;
# add proxies=proxyDict to actually route the request through the proxies
page = requests.get(url, headers=headers)
print(page.status_code)
soup = BeautifulSoup(page.content, "lxml")
#print(soup.prettify())
title = soup.find(id="productTitle")
if title:
    title = title.get_text()
else:
    title = "default_title"
print(title)

BeautifulSoup not finding meta tag information

All three title lookups return None. However, when I view the page source, I can clearly see that twitter:title, og:title and og:description exist.
url = 'https://www.vox.com/culture/2018/8/3/17644464/christopher-robin-review-pooh-bear-winnie'
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
title = soup.find("meta", property="twitter:title")
title2 = soup.find("meta", property="og:title")
title3 = soup.find("meta", property="og:description")
print("TITLE: "+str(title))
print("TITLE2: "+str(title2))
print("TITLE3: "+str(title3))
soup.find("meta", property="twitter:title") must be soup.find("meta", {"name": "twitter:title"}) (twitter:title is set in the name attribute, not property). The other two lines work fine for me.
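A minimal illustration of the difference, against an inline snippet rather than the live page:

```python
from bs4 import BeautifulSoup

html = '''
<meta name="twitter:title" content="Tweet title">
<meta property="og:title" content="OG title">
'''
soup = BeautifulSoup(html, 'html.parser')

# twitter:title lives in the name attribute, so filter on it explicitly
print(soup.find("meta", {"name": "twitter:title"})["content"])  # Tweet title
# og:title really does use a property attribute, so the keyword form works
print(soup.find("meta", property="og:title")["content"])        # OG title
# filtering on property finds nothing for twitter:title
print(soup.find("meta", property="twitter:title"))              # None
```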
You need to specify User-Agent in headers, also twitter:title is in name attribute:
from bs4 import BeautifulSoup
import requests
url = 'https://www.vox.com/culture/2018/8/3/17644464/christopher-robin-review-pooh-bear-winnie'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, "lxml")
title1 = soup.select_one('meta[name="twitter:title"]')['content']
title2 = soup.select_one('meta[property="og:title"]')['content']
title3 = soup.select_one('meta[property="og:description"]')['content']
print("TITLE1: "+str(title1))
print("TITLE2: "+str(title2))
print("TITLE3: "+str(title3))
Prints:
TITLE1: Christopher Robin is a corporate cash-in, but it fakes sincerity better than most
TITLE2: Christopher Robin is a corporate cash-in, but it fakes sincerity better than most
TITLE3: Winnie the Pooh and pals return to give their old friend a pep talk in a movie overshadowed by the company that made it.

Shortened link not working with BeautifulSoup Python

This code gets the information from the site perfectly fine:
url = 'https://www.vogue.com/article/mamma-mia-2-here-we-go-again-review?mbid=social_twitter'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, "lxml")
title = soup.find("meta", {"name": "twitter:title"})
title2 = soup.find("meta", property="og:title")
title3 = soup.find("meta", property="og:description")
print("TITLE: "+str(title['content']))
print("TITLE2: "+str(title2['content']))
print("TITLE3: "+str(title3['content']))
However, when I replace the url with this shortened link it returns:
print("TITLE: "+str(title['content']))
TypeError: 'NoneType' object has no attribute '__getitem__'
The URL shortener sends a meta refresh to redirect to the desired page. This code should help:
from bs4 import BeautifulSoup
import requests
import re
shortened_url = '<YOUR SHORTENED URL>'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:61.0) Gecko/20100101 Firefox/61.0'}
response = requests.get(shortened_url, headers=headers)
soup = BeautifulSoup(response.text, "lxml")
while True:
    # is a meta refresh there?
    if soup.select_one('meta[http-equiv=refresh]'):
        refresh_url = re.search(r'url=(.*)', soup.select_one('meta[http-equiv=refresh]')['content'], flags=re.I)[1]
        response = requests.get(refresh_url, headers=headers)
        soup = BeautifulSoup(response.text, "lxml")
    else:
        break
title = soup.find("meta", {"name": "twitter:title"})
title2 = soup.find("meta", property="og:title")
title3 = soup.find("meta", property="og:description")
print("TITLE: "+str(title['content']))
print("TITLE2: "+str(title2['content']))
print("TITLE3: "+str(title3['content']))
Prints:
TITLE: Mamma Mia! Here We Go Again Is the Only Good Thing About This Summer - Vogue
TITLE2: Mamma Mia! Here We Go Again Is the Only Good Thing About This Summer
TITLE3: Is it possible to change your country of origin to a movie sequel?

Beautifulsoup parsing error

I am trying to extract some information about an App on Google Play and BeautifulSoup doesn't seem to work.
The link is this (say):
https://play.google.com/store/apps/details?id=com.cimaxapp.weirdfacts
My code:
url = "https://play.google.com/store/apps/details?id=com.cimaxapp.weirdfacts"
r = requests.get(url)
html = r.content
soup = BeautifulSoup(html)
l = soup.find_all("div", { "class" : "document-subtitles"})
print len(l)
0 #How is this 0?! There is clearly a div with that class
I decided to go all in, didn't work either:
i = soup.select('html body.no-focus-outline.sidebar-visible.user-has-no-subscription div#wrapper.wrapper.wrapper-with-footer div#body-content.body-content div.outer-container div.inner-container div.main-content div div.details-wrapper.apps.square-cover.id-track-partial-impression.id-deep-link-item div.details-info div.info-container div.info-box-top')
print i
What am I doing wrong?
You need to pretend to be a real browser by supplying the User-Agent header:
import requests
from bs4 import BeautifulSoup
url = "https://play.google.com/store/apps/details?id=com.cimaxapp.weirdfacts"
r = requests.get(url, headers={
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.94 Safari/537.36"
})
html = r.content
soup = BeautifulSoup(html, "html.parser")
title = soup.find(class_="id-app-title").get_text()
rating = soup.select_one(".document-subtitle .star-rating-non-editable-container")["aria-label"].strip()
print(title)
print(rating)
Prints the title and the current rating:
Weird Facts
Rated 4.3 stars out of five stars
To get the additional information field values, you can use the following generic function:
def get_info(soup, text):
    return soup.find("div", class_="title", text=lambda t: t and t.strip() == text).\
        find_next_sibling("div", class_="content").get_text(strip=True)
Then, if you do:
print(get_info(soup, "Size"))
print(get_info(soup, "Developer"))
You will see printed:
1.4M
Email email#here.com
