HTML parsing with BeautifulSoup in Python: unknown error

I know that this code works for other websites that end in .com.
However, I noticed that it doesn't work when I try to parse websites that end in .kr.
Can somebody help me find out why this is happening, and suggest an alternative way to parse these kinds of websites?
Following is my code:
import requests
from bs4 import BeautifulSoup
URL = 'https://everytime.kr/#nN4K1XC0weHnnM9VB5Qe'
page = requests.get(URL)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find(id='container')
print(results)
The URL here is a link to my timetable. I need to parse this website so that I can easily collect the information for each subject and its related data (duration, location, professor's name, etc.).
Thanks

The website serves its content dynamically with JavaScript, so a plain HTTP request gets an empty response back; you can use Selenium to render the page first.
Example:
from selenium import webdriver
from bs4 import BeautifulSoup
import time

# Point this at your local chromedriver binary
driver = webdriver.Chrome(executable_path=r'C:\Program Files\ChromeDriver\chromedriver.exe')
url = 'https://everytime.kr/#nN4K1XC0weHnnM9VB5Qe'
driver.get(url)
time.sleep(5)  # give the page's JavaScript time to render
soup = BeautifulSoup(driver.page_source, 'html.parser')
results = soup.find(id='container')
print(results)
driver.close()
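A fixed time.sleep(5) can be flaky on slow connections. If you are on Selenium 4, an explicit wait on the target element is usually more robust; a minimal sketch, assuming the same container id as above:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # Selenium 4 can locate the driver itself
driver.get('https://everytime.kr/#nN4K1XC0weHnnM9VB5Qe')
# Block until #container appears (up to 10 s) instead of sleeping blindly
WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, 'container'))
)
print(driver.find_element(By.ID, 'container').get_attribute('outerHTML'))
driver.quit()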

Related

BeautifulSoup.select classname not working

I am trying to find tags by CSS class, using BeautifulSoup.
I read the documentation and tried different approaches, but the code below returns new_elem : [].
Could you help me understand what I am doing wrong? Thanks.
import requests
from bs4 import BeautifulSoup
url = "https://solanamonkeysclub.com/#/#mint"
response = requests.get(url)
response.encoding = response.apparent_encoding
soup = BeautifulSoup(response.text, 'html.parser')
new_elems = str(soup.select('.ant-card-body'))
print(f'{"new_elem":10} : {new_elems}')
As the URL is dynamic, I used Selenium together with bs4 and got the following output.
Code:
import requests
from bs4 import BeautifulSoup
import time
from selenium import webdriver
driver = webdriver.Chrome('chromedriver.exe')
url = "https://solanamonkeysclub.com/#/#mint"
driver.get(url)
time.sleep(8)
soup = BeautifulSoup(driver.page_source, 'html.parser')
new_elems = soup.select('.ant-card-body')
for new_elem in new_elems:
    print(f'{"new_elem":10} : {new_elem.text}')
OUTPUT:
new_elem : 0
new_elem : 0
Have you looked at the output at all? You should bring up this page in a browser and do a "view source", or do print(response.text) after you fetch it. The page as delivered contains no HTML elements; the entire page is built dynamically using JavaScript.
You will need to use something like Selenium to scrape it.
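To see this for yourself, a quick check along those lines (the slice length is arbitrary):
import requests

response = requests.get("https://solanamonkeysclub.com/#/#mint")
# The delivered HTML is essentially a JS bundle loader; no .ant-card-body markup yet
print(response.text[:500])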

Can't identify a table when scraping

Beginner question: I'm attempting to scrape data from a table, but I can't seem to locate it. I've tried using both the class and the id to identify it, but my result is 0. The code and output are below.
# Import necessary packages
from bs4 import BeautifulSoup
import requests
import pandas as pd
import re
# Site URL
url="https://fbref.com/en/comps/9/stats/Premier-League-Stats"
# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text
# Parse HTML code for the entire site
soup = BeautifulSoup(html_content, "lxml")
#print(soup.prettify()) # print the parsed data of html
gdp = soup.find_all("table", attrs={"id": "stats_standard"})
print("Number of tables on site: ",len(gdp))
Output - 'Number of tables on site: 0'
I suggest using Selenium for this kind of scraping; it is very reliable.
This code should work for you:
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
option = Options()
option.add_argument('--headless')
url = 'https://fbref.com/en/comps/9/stats/Premier-League-Stats'
driver = webdriver.Chrome(options=option)
driver.get(url)
bs = BeautifulSoup(driver.page_source, 'html.parser')
gdp = bs.find_all('table', {'id': 'stats_standard'})
driver.quit()
print("Number of tables on site: ",len(gdp))
Output
Number of tables on site: 1
Can you find the table(s) without using attrs={"id": "stats_standard"}?
I have checked and indeed I cannot find any table whose ID is stats_standard (but there is one with ID stats_standard_sh, for example). So I guess you might be using the wrong ID.
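For what it's worth, Sports Reference sites such as fbref.com are known to ship many of their tables inside HTML comments, which the page's JavaScript later unwraps; that would explain why Selenium sees the table while requests finds none. A sketch that re-parses the comments without Selenium (the table id is taken from the question, so verify it against the live page):
import requests
from bs4 import BeautifulSoup, Comment

url = "https://fbref.com/en/comps/9/stats/Premier-League-Stats"
soup = BeautifulSoup(requests.get(url).text, "lxml")

# Re-parse each HTML comment and look for the hidden table inside it
for comment in soup.find_all(string=lambda text: isinstance(text, Comment)):
    inner = BeautifulSoup(comment, "lxml")
    table = inner.find("table", attrs={"id": "stats_standard"})
    if table is not None:
        print("Rows found:", len(table.find_all("tr")))
        break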

Beautifulsoup can not find table containing specific class

from bs4 import BeautifulSoup
import requests
url="https://www.calculator.net/currency-calculator.html"
# Make a GET request to fetch the raw HTML content
html_content = requests.get(url).text
# Parse the html content
soup = BeautifulSoup(html_content,'html5lib')
print(soup.prettify()) # print the parsed data of html
conv_table = soup.find("table", attrs={"class": "cinfoT "})
conv_data = conv_table.tbody.find_all("tr")
I have written the above script to get the table listed on this particular website.
When I run it, conv_table comes back as None.
If you visit the website, I basically want to extract the second, bigger table, whose class name contains "cinfoT ". I have also checked that there are some blank spaces in the class name.
Please help me out.
Thanks in advance.
It is because this data is loaded by JavaScript; requests will only give you the plain HTML file. Try Selenium:
from selenium import webdriver
from bs4 import BeautifulSoup

DRIVER_PATH = "Your selenium chrome driver path"
url = 'https://www.calculator.net/currency-calculator.html'
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
driver.get(url)
html_soup = BeautifulSoup(driver.page_source, 'html.parser')
# Keep the trailing space: BeautifulSoup also matches the exact class
# attribute string, which on this page is literally "cinfoT "
table = html_soup.find_all('table', class_='cinfoT ')
driver.quit()
print(table[0].tbody)
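Once the table is in hand, pulling the rows out is plain bs4. A sketch continuing from table[0] above (the exact column layout is an assumption, so verify it on the live page):
# Each row of the conversion table holds currency and rate cells
for row in table[0].tbody.find_all("tr"):
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if cells:  # skip header/empty rows
        print(cells)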

How to get some data in real time from a website using python?

I want to fetch some data from the website
https://web.sensibull.com/optionchain?expiry=2020-03-26&tradingsymbol=NIFTY
I am using the BeautifulSoup library to fetch this data, and have tried the following code:
import requests
import urllib.request
import time
from bs4 import BeautifulSoup
url = 'https://web.sensibull.com/optionchain?expiry=2020-03-26&tradingsymbol=NIFTY'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
b = soup.find("div", {"class": "style__AtmIVWrapper-idZNMX kUMMRI"})
print(b)
But it shows "None" as the output.
Although there is only one class with this name in the full HTML code, I also tried this:
for b in soup.find_all('div', attrs={'class': 'style__AtmIVWrapper-idZNMX kUMMRI'}):
    print(b.get_text())
    print(len(b))
But it doesn't work.
I also tried soup.find("div"), but it does not show the required div tag in the output, maybe due to nested divs.
I am unable to fetch this data and proceed with my work. Please help.
If you are looking for code, this might help:
from selenium import webdriver
import time
webpage = 'https://web.sensibull.com/optionchain?expiry=2020-03-26&tradingsymbol=NIFTY'
driver = webdriver.Chrome(executable_path='Your/path/to/chromedriver.exe')
driver.get(webpage)
time.sleep(10)
nifty_fut = driver.find_element_by_xpath('//*[@id="app"]/div/div[4]/div[2]/div[3]/div/div/div[2]/div[1]/div[1]/div/button/span[1]/div[1]')
print(nifty_fut.text)
atm_iv = driver.find_element_by_xpath('//*[@id="app"]/div/div[4]/div[2]/div[3]/div/div/div[2]/div[1]/div[2]')
print(atm_iv.text)
driver.quit()
It could be a syntax problem; try soup.find_all("div", class_="style__AtmIVWrapper-idZNMX kUMMRI") or just soup.find("div", class_="style__AtmIVWrapper-idZNMX kUMMRI").
If you are interested in web scraping and bs4, take a look at the documentation: https://www.crummy.com/software/BeautifulSoup/bs4/doc/#find
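One caveat either way: class names like style__AtmIVWrapper-idZNMX kUMMRI are generated by a CSS-in-JS build, and the hashed part can change between deployments. Matching on the stable prefix with a CSS attribute selector is usually safer against a soup built from Selenium's page_source (the raw requests response won't contain these nodes at all). A sketch, where the prefix is an assumption to check in the live DOM:
# Match any div whose class attribute starts with the stable prefix,
# ignoring the build-generated hash suffix
divs = soup.select('div[class^="style__AtmIVWrapper"]')
for div in divs:
    print(div.get_text())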

Beautifulsoup not returning complete HTML of the page

I have been digging on the site for some time and I'm unable to find a solution to my issue. I'm fairly new to web scraping and am simply trying to extract some links from a web page using Beautiful Soup.
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "https://www.sofascore.com/pt/futebol/2018-09-18"
page = urlopen(url).read()
soup = BeautifulSoup(page, "lxml")
print(soup)
At the most basic level, all I'm trying to do is access a specific tag within the website. I can work out the rest for myself, but the part I'm struggling with is that the tag I am looking for is not in the output.
For example, using the built-in find() I can grab the following div class tag:
class="l__grid js-page-layout"
However, what I'm actually looking for are the contents of a tag that is embedded at a lower level in the tree:
js-event-list-tournament-events
When I perform the same find operation on the lower-level tag, I get no results.
I'm using an Azure-based Jupyter Notebook and have tried a number of the solutions to similar problems on Stack Overflow, with no luck.
Thanks!
Kenny
The page uses JS to load the data dynamically, so you have to use Selenium. Check the code below.
Note that you have to install selenium and chromedriver (unzip the file and copy it into your Python folder).
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
url = "https://www.sofascore.com/pt/futebol/2018-09-18"
options = Options()
options.add_argument('--headless')
options.add_argument('--disable-gpu')
driver = webdriver.Chrome(chrome_options=options)
driver.get(url)
time.sleep(3)
page = driver.page_source
driver.quit()
soup = BeautifulSoup(page, 'html.parser')
container = soup.find_all('div', attrs={
    'class': 'js-event-list-tournament-events'})
print(container)
Or you can use their JSON API:
import requests
url = 'https://www.sofascore.com/football//2018-09-18/json'
r = requests.get(url)
print(r.json())
I had the same problem and the following code worked for me. Chromedriver must be installed!
import time
from bs4 import BeautifulSoup
from selenium import webdriver
chromedriver_path= "/Users/.../chromedriver"
driver = webdriver.Chrome(chromedriver_path)
url = "https://yourURL.com"
driver.get(url)
time.sleep(3) #if you want to wait 3 seconds for the page to load
page_source = driver.page_source
soup = BeautifulSoup(page_source, 'lxml')
You can then use this soup as usual.
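For example, the lookup from the question would then be (class name taken from the original post):
# Same search that returned nothing against the raw urlopen() HTML
containers = soup.find_all("div", class_="js-event-list-tournament-events")
print(len(containers))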
