Web Scraping with Beautiful Soup - Python

To start, this is my first time using Stack Overflow!
I started my journey with Python yesterday, and I'm trying to extract values from some pages automatically.
This is my code:
import requests
from bs4 import BeautifulSoup

url = 'https://www.jpg.store/collection/chilledkongs'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
div = soup.find('div', class_='stat-title')
print(div)
I'm getting nothing, and my objective is to get the floor price. At the moment it is 888.

The floor price is loaded via JavaScript from an external source. To get it via requests, use the following example:
import json
import requests
from bs4 import BeautifulSoup

url = "https://www.jpg.store/collection/chilledkongs"
api_url = "https://server.jpgstoreapis.com/collection/{}/floor"

# The Next.js page embeds its state as JSON in the #__NEXT_DATA__ script tag
soup = BeautifulSoup(requests.get(url).content, "html.parser")
data = soup.select_one("#__NEXT_DATA__").contents[0]
data = json.loads(data)

# The policy id identifies the collection on the API server
policy_id = data["props"]["pageProps"]["collection"]["policy_id"]

# The floor endpoint returns the price in millionths of ADA
data = requests.get(api_url.format(policy_id)).json()
print(data["floor"] / 1_000_000)
Prints:
888.0
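As a side note, once you know the collection's policy_id you can skip the HTML parse entirely and query the floor endpoint directly. A minimal sketch, assuming the endpoint above stays unchanged:

import requests

api_url = "https://server.jpgstoreapis.com/collection/{}/floor"
policy_id = "..."  # hypothetical: paste the policy id extracted above
print(requests.get(api_url.format(policy_id)).json()["floor"] / 1_000_000)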

As @Bao Huynh Lamn stated, the website is being dynamically generated/rendered using JavaScript, so you can use an automation tool like Selenium.
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager
from bs4 import BeautifulSoup

driver = webdriver.Chrome(ChromeDriverManager().install())
url = 'https://www.jpg.store/collection/chilledkongs'
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
driver.quit()

# The floor price is the second-to-last div with class 'stat-title'
div = soup.find_all('div', class_='stat-title')[-2]
print(div.text)
Output:
888
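As a side note, if you don't want a browser window popping up, the same script can run Chrome headless. A sketch of the option setup, assuming the selenium 3 style used above; everything else stays the same:

from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # run Chrome without opening a window
driver = webdriver.Chrome(ChromeDriverManager().install(), options=options)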

Related

Unable to parse href using BeautifulSoup in Python

I am trying to web scrape the following webpage to get a specific href using BS4. They've just changed the page layout, and because of that I am unable to parse it correctly. I hope someone can help.
Webpage trying to scrape: https://www.temit.co.uk/investor/resources/temit-literature
Tag trying to get: href="https://franklintempletonprod.widen.net/s/c667rc7chx/173-fact-sheet-retail-investor-factsheet"
Currently I am using the following code (BS4); however, I am unable to get the specific element anymore.
import requests
from bs4 import BeautifulSoup

url = "https://www.temit.co.uk/investor/resources/temit-literature"
page = requests.get(url)  # request the website
soup = BeautifulSoup(page.content, 'html.parser')
table = soup.find_all('div', attrs={'class': 'row ng-star-inserted'})
url_TEM = table[1].find_all('a')[0].get('href')
url_TEM = 'https://www.temit.co.uk' + url_TEM
The URL is dynamic; that's why I use Selenium with bs4, getting the desired output as follows:
Code:
from bs4 import BeautifulSoup
import time
from selenium import webdriver

driver = webdriver.Chrome('chromedriver.exe')
url = "https://www.temit.co.uk/investor/resources/temit-literature"
driver.get(url)
time.sleep(8)  # wait for the JavaScript-rendered table to load
soup = BeautifulSoup(driver.page_source, 'html.parser')

table_urls = soup.select('table.table-striped tbody tr td a')
for table_url in table_urls:
    url = table_url['href']
    print(url)
Output:
https://franklintempletonprod.widen.net/s/gd2tmrc8cl/final-temit-annual-report-31-march-2021
https://franklintempletonprod.widen.net/s/lzrcnmhpvr/temit-semi-annual-report-temit-semi-annual-report-30-09-2020
https://www.londonstockexchange.com/stock/TEM/templeton-emerging-markets-investment-trust-plc/analysis
https://franklintempletonprod.widen.net/s/c667rc7chx/173-fact-sheet-retail-investor-factsheet
https://franklintempletonprod.widen.net/s/bdxrtlljxg/temit_manager_update
https://franklintempletonprod.widen.net/s/6djgk6xknx/temit-holdings-report
/content-kid/kid/en-GB/KID-GB0008829292-GB-en-GB.pdf
https://franklintempletonprod.widen.net/s/flblzqxmcg/temit-investor-disclosure-document.pdf-26-04-2021
https://franklintempletonprod.widen.net/s/l5gmdbf6vp/agm-results-announcement
https://franklintempletonprod.widen.net/s/qmkphz9s5s/agm-shareholder-documentation
https://franklintempletonprod.widen.net/s/b258tgpjb7/agm-uk-voting-guide
https://franklintempletonprod.widen.net/s/c5zhbbnxql/agm-nz-voting-guide
https://franklintempletonprod.widen.net/s/shmljbvtjq/temit-annual-report-mar-2019
https://franklintempletonprod.widen.net/s/shmljbvtjq/temit-annual-report-mar-2019
https://franklintempletonprod.widen.net/s/5bjq2qkmh5/temit-annual-report-mar-2018
https://franklintempletonprod.widen.net/s/bnx9mfwlzw/temit-annual-report-mar-2017
https://franklintempletonprod.widen.net/s/rfqc7xrnfn/temit-annual-report-mar-2016
https://franklintempletonprod.widen.net/s/zfzxlflxnq/temit-annual-report-mar-2015
https://franklintempletonprod.widen.net/s/dj9zl8rpcm/temit-annual-report-mar-2014
https://franklintempletonprod.widen.net/s/7xshxmkpnh/temit-annual-report-mar-2013
https://franklintempletonprod.widen.net/s/7gwx2qmcdr/temit-annual-report-mar-2012
https://franklintempletonprod.widen.net/s/drpd7gbvxl/temit-annual-report-mar-2011
https://franklintempletonprod.widen.net/s/2pb2kxkgbl/temit-annual-report-mar-2010
https://franklintempletonprod.widen.net/s/g6pdr9hq2d/temit-annual-report-mar-2009
https://franklintempletonprod.widen.net/s/7pvjf6fhl9/temit-annual-report-mar-2008
https://franklintempletonprod.widen.net/s/lzrcnmhpvr/temit-semi-annual-report-temit-semi-annual-report-30-09-2020
https://franklintempletonprod.widen.net/s/xwvrncvkj2/temit-half-year-report-sept-2019
https://franklintempletonprod.widen.net/s/lbp5ssv8mc/temit-half-year-report-sept-2018
https://franklintempletonprod.widen.net/s/hltddqhqcf/temit-half-year-report-sept-2017
https://franklintempletonprod.widen.net/s/2tlqxxflgn/temit-half-year-report-sept-2016
https://franklintempletonprod.widen.net/s/lbcgztjjkj/temit-half-year-report-sept-2015
https://franklintempletonprod.widen.net/s/2tjxzgbdvx/temit-half-year-report-sept-2014
https://franklintempletonprod.widen.net/s/gzrpjwb7bf/temit-half-year-report-sept-2013
https://franklintempletonprod.widen.net/s/lxhbdrmc8z/temit-half-year-report-sept-2012
https://franklintempletonprod.widen.net/s/zzpxrrrpmc/temit-half-year-report-sept-2011
https://franklintempletonprod.widen.net/s/zjdd2gn5jc/temit-half-year-report-sept-2010
https://franklintempletonprod.widen.net/s/7sbqfxxkrd/temit-half-year-report-sept-2009
https://franklintempletonprod.widen.net/s/pvswpqkdvb/temit-half-year-report-sept-2008-1
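As a side note, the fixed time.sleep(8) can be replaced with an explicit wait that continues as soon as the table is present. A sketch using the same selector as above:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome('chromedriver.exe')
driver.get("https://www.temit.co.uk/investor/resources/temit-literature")
# Wait up to 20 seconds for at least one table link to appear
WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "table.table-striped tbody tr td a"))
)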

BeautifulSoup.select classname not working

I am trying to find tags by CSS class using BeautifulSoup.
I have read the documentation and tried different ways, but the code below returns new_elem : [].
Could you help me understand what I am doing wrong? Thanks.
import requests
from bs4 import BeautifulSoup
url = "https://solanamonkeysclub.com/#/#mint"
response = requests.get(url)
response.encoding = response.apparent_encoding
soup = BeautifulSoup(response.text, 'html.parser')
new_elems = str(soup.select('.ant-card-body'))
print(f'{"new_elem":10} : {new_elems}')
As the URL is dynamic, I use Selenium with bs4 and get the following output:
Code:
from bs4 import BeautifulSoup
import time
from selenium import webdriver

driver = webdriver.Chrome('chromedriver.exe')
url = "https://solanamonkeysclub.com/#/#mint"
driver.get(url)
time.sleep(8)  # wait for the JavaScript-rendered page to build
soup = BeautifulSoup(driver.page_source, 'html.parser')

new_elems = soup.select('.ant-card-body')
for new_elem in new_elems:
    print(f'{"new_elem":10} : {new_elem.text}')
OUTPUT:
new_elem : 0
new_elem : 0
Have you looked at the output at all? You should bring up this page in a browser and do a "view source", or do print(response.text) after you fetch it. The page as delivered contains no HTML elements; the entire page is built dynamically using JavaScript.
You will need to use something like Selenium to scrape it.
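To see this for yourself, here is a quick diagnostic sketch: fetch the page with requests and confirm the delivered HTML contains no .ant-card-body elements.

import requests
from bs4 import BeautifulSoup

r = requests.get("https://solanamonkeysclub.com/#/#mint")
soup = BeautifulSoup(r.text, "html.parser")
# The server-delivered page is an empty JavaScript shell, so this prints 0
print(len(soup.select(".ant-card-body")))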

Web scraping several a href

I would like to scrape this page with Python: https://statusinvest.com.br/acoes/proventos/ibovespa.
With this code:
import requests
from bs4 import BeautifulSoup as bs

URL = "https://statusinvest.com.br/acoes/proventos/ibovespa"
page = 1
req = requests.get(URL+str(page))
soup = bs(req.text, 'html.parser')
container = soup.find('div', attrs={'class','list'})
dividends = container.find('a')
for dividend in dividends:
    links = dividend.find_all('a')
    print(links)
But it doesn't return anything.
Can someone help me please?
Edited: see the updated code below to access any of the data you mentioned in the comment; you can modify it to your needs, as all the data on that page is inside the data variable.
Updated Code:
import json
import requests
from bs4 import BeautifulSoup as bs

url = "https://statusinvest.com.br"
req = requests.get(f"{url}/acoes/proventos/ibovespa")
soup = bs(req.content, 'html.parser')

# All of the page's data sits as JSON in a hidden <input id="result">
data = json.loads(soup.find('input', attrs={'id': 'result'})["value"])

print("Date Com Data")
for datecom in data["dateCom"]:
    print(f"{datecom['code']}\t{datecom['companyName']}\t{datecom['companyNameClean']}\t{datecom['companyId']}\t{datecom['companyId']}\t{datecom['resultAbsoluteValue']}\t{datecom['dateCom']}\t{datecom['paymentDividend']}\t{datecom['earningType']}\t{datecom['dy']}\t{datecom['recentEvents']}\t{datecom['recentEvents']}\t{datecom['uRLClear']}")

print("\nDate Payment Data")
for datePayment in data["datePayment"]:
    print(f"{datePayment['code']}\t{datePayment['companyName']}\t{datePayment['companyNameClean']}\t{datePayment['companyId']}\t{datePayment['companyId']}\t{datePayment['resultAbsoluteValue']}\t{datePayment['dateCom']}\t{datePayment['paymentDividend']}\t{datePayment['earningType']}\t{datePayment['dy']}\t{datePayment['recentEvents']}\t{datePayment['recentEvents']}\t{datePayment['uRLClear']}")

print("\nProvisioned Data")
for provisioned in data["provisioned"]:
    print(f"{provisioned['code']}\t{provisioned['companyName']}\t{provisioned['companyNameClean']}\t{provisioned['companyId']}\t{provisioned['companyId']}\t{provisioned['resultAbsoluteValue']}\t{provisioned['dateCom']}\t{provisioned['paymentDividend']}\t{provisioned['earningType']}\t{provisioned['dy']}\t{provisioned['recentEvents']}\t{provisioned['recentEvents']}\t{provisioned['uRLClear']}")
Looking at the source code of that website, one can fetch the JSON directly and get the desired links; follow the code below.
Code:
import json
import requests
from bs4 import BeautifulSoup as bs

url = "https://statusinvest.com.br"
links = []
req = requests.get(f"{url}/acoes/proventos/ibovespa")
soup = bs(req.content, 'html.parser')
data = json.loads(soup.find('input', attrs={'id': 'result'})["value"])

for datecom in data["dateCom"]:
    links.append(f"{url}{datecom['uRLClear']}")
for datePayment in data["datePayment"]:
    links.append(f"{url}{datePayment['uRLClear']}")
for provisioned in data["provisioned"]:
    links.append(f"{url}{provisioned['uRLClear']}")
print(links)
Output:
Let me know if you have any questions :)

Scraping tracks' links from YouTube playlist with Beautiful Soup

I am trying to scrape all links for tracks from my playlist.
This is my code:
from selenium import webdriver
from time import sleep
from bs4 import BeautifulSoup
from urllib.request import urlopen
import re
playlist = 'minimal_house'
url = 'https://www.youtube.com/channel/UCt2GxiTBN_RiE-cbP0cmk5Q/playlists'
html = urlopen(url)
soup = BeautifulSoup(html , 'html.parser')
tracks = soup.find(title = playlist).get('href')
print(tracks)
url = url + tracks
print(url)
html = urlopen(url)
soup = BeautifulSoup(html, 'html.parser')
links = soup.find_all('a',attrs={'class':'yt-simple-endpoint style-scope ytd-playlist-panel-video-renderer'})
print(links)
I couldn't scrape the 'a' tags, neither by id nor by class name.
This is my messy code that works for me:
from bs4 import BeautifulSoup
from urllib.request import urlopen

playlist = 'minimal_house'
url = 'https://www.youtube.com/channel/UCt2GxiTBN_RiE-cbP0cmk5Q/playlists'
html = urlopen(url)
soup = BeautifulSoup(html, 'html.parser')

tracks = soup.find('a', attrs={'title': playlist}).get('href')
print(tracks)

url = 'https://www.youtube.com' + str(tracks)
print(url)
html = urlopen(url)
soup = BeautifulSoup(html, 'html.parser')

# Keep only hrefs that exist and point to a watch page
links = soup.find_all('a')
links = set(link.get('href') for link in links if link.get('href') and 'watch' in link.get('href'))
print(links)
Since class names change based on the device making the request, it's better to collect all the links in this case. You also need to use Selenium to scroll down to fetch the whole list; see the sketch below.
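A minimal scrolling sketch, assuming Chrome and an arbitrary scroll count that you would tune until no new items appear:

from time import sleep
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.youtube.com/channel/UCt2GxiTBN_RiE-cbP0cmk5Q/playlists')
for _ in range(10):  # scroll repeatedly so YouTube loads more items
    driver.execute_script("window.scrollTo(0, document.documentElement.scrollHeight);")
    sleep(2)  # give the newly requested items time to render
html = driver.page_source  # parse this with BeautifulSoup as shown above
driver.quit()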

BeautifulSoup doesn't show all the elements in a tag

I'm trying to parse this website and get the information about the auto in content-box card__body with BeautifulSoup.find, but it doesn't find all the classes. I also tried webdriver.PhantomJS(), but it also showed nothing.
Here is my code:
from bs4 import BeautifulSoup
from selenium import webdriver
url='http://www.autobody.ru/catalog/10230/217881/'
browser = webdriver.PhantomJS()
browser.get(url)
html = browser.page_source
soup = BeautifulSoup(html, 'html5lib')
JTitems = soup.find("div", attrs={"class":"content-box__strong red-text card__price"})
JTitems
or
w = soup.find("div", attrs={"class":"content-box card__body"})
w
Why doesn't this approach work? What should I do to get all the information about the auto? I am using Python 2.7.
Find the table where the info you need is, then find all the tr elements and loop through them to get the text.
from bs4 import BeautifulSoup
from selenium import webdriver

url = 'http://www.autobody.ru/catalog/10230/217881/'
browser = webdriver.Chrome()
browser.get(url)
html = browser.page_source
soup = BeautifulSoup(html, 'lxml')

price = soup.find('div', class_='price').get_text()
print(price)

for tr in soup.find('table', class_='tech').find_all('tr'):
    print(tr.get_text())

browser.quit()  # quit() ends the whole session; a separate close() is unnecessary
