I want to download data from a page where the link to each data file is found in a row of a table.
I wrote code using BeautifulSoup to read the href of every row, but it couldn't give me the list of links to download. I suspect it couldn't see the table data cells (td) inside each table row (tr).
from bs4 import BeautifulSoup
import urllib.request
testurl = 'https://www.ercot.com/mp/data-products/data-product-details?id=NP3-562-CD'
page = urllib.request.urlopen(testurl)
page_content = BeautifulSoup(page, "html.parser")
table_dt = page_content.find("table")  # find_all() returns a list, which has no .select()
for tt in table_dt.select("tr"):
    print(tt)
This prints:
<tr>
<th>Friendly Name</th>
<th colspan="2">Posted</th>
<th>Available Files</th>
</tr>
The table shows:
[<table class="table table-condensed report-table" id="reportTable">
<thead>
<tr>
<th>Friendly Name</th>
<th colspan="2">Posted</th>
<th>Available Files</th>
</tr>
</thead>
<tbody>
</tbody>
</table>]
As can be seen, there is no information for the other rows (tr); it only captures the header row.
Could you please guide me on how to get the link for each row so that I can download the data?
Most likely, only the structure of the table is in the original HTML page, and the row data is retrieved by a JavaScript request after the page loads. If you can figure out what that JavaScript request is (probably by using your browser's "web developer" tools, under the Network tab), you can fetch the data that way.
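As the empty <tbody> shows, BeautifulSoup is not dropping anything; the rows simply aren't in the HTML that urllib receives. A minimal sketch demonstrating this on the same markup the page returns:

```python
from bs4 import BeautifulSoup

# The raw HTML that urllib actually receives: the <tbody> is empty
# because the rows are injected later by JavaScript in the browser.
raw_html = """
<table class="table table-condensed report-table" id="reportTable">
  <thead>
    <tr><th>Friendly Name</th><th colspan="2">Posted</th><th>Available Files</th></tr>
  </thead>
  <tbody></tbody>
</table>
"""

soup = BeautifulSoup(raw_html, "html.parser")
rows = soup.select("#reportTable tbody tr")
print(len(rows))  # 0 -- the rows are genuinely absent from the static HTML
```

So the fix is not a different parser or selector; it's finding the request that actually carries the row data.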
In my previous question (How to speed up parsing using BeautifulSoup?), I asked how to crawl an HTML website more quickly, and the answer helped me a lot.
But I've run into another problem, this time with crawling ticket prices.
Following the answer to my previous question, I got the JSON text embedded in the web page. From that JSON I could get almost all the information about the festivals, such as title, date, location, poster image URL, and performers.
But there was no pricing information, so I tried to get the price from another part of the website.
When I turned on Google Chrome's developer mode, I found a table with the pricing (it includes Korean, but you don't have to understand it):
<table cellpadding="0" cellspacing="0">
<colgroup>
<col>
<col style="width:20px;">
<col>
</colgroup>
<tbody id="divSalesPrice">
<tr>
<td>2일권(입장권)</td>
<td> </td>
<td class="costTd">
<span>140,000 원</span>
</td>
</tr>
<tr>
<td>1일권(입장권)</td>
<td> </td>
<td class="costTd">
<span>88,000 원</span>
</td>
</tr>
</tbody>
</table>
The numbers in the span tags (140,000 and 88,000) are the prices I want to extract. So I thought using BeautifulSoup would be effective:
from bs4 import BeautifulSoup
import requests

def Soup(content):
    soup = BeautifulSoup(content, 'lxml')
    return soup

def DetailLink(url):
    req = requests.get(url)
    soup = Soup(req.content)
    spans = soup.find_all('span', class_='fw_bold')
    links = [f'{url[:27]}{span.a["href"]}' for span in spans]
    return links

def Price():
    links = DetailLink('http://ticket.interpark.com/TPGoodsList.asp?Ca=Liv&SubCa=Fes')
    with requests.Session() as session:
        for link in links:
            req = session.get(link)
            soup = Soup(req.content)
            price = soup.find('tbody', id='divSalesPrice')
            print(price)

Price()
However, the result was disappointing...
<tbody id="divSalesPrice">
<!-- 등록된 기본가 가져오기 오류-->
<tr>
<td colspan="3" id="liBasicPrice">
<ul>
</ul>
</td>
</tr>
</tbody>
The comment '등록된 기본가 가져오기 오류' means 'An error occurred while getting the price.'
Does this mean the website operator has blocked other users from crawling the price info on this page?
OK, if we look carefully, the price data is not present when you first request the page; it's loaded afterwards. That means we need to get the price data from somewhere else.
If you inspect the Network section in Chrome's developer tools, you'll find a strange-looking request to a GoodsInfoJSON.asp URL, and its response has the data you're looking for.
Now the only thing you need to do is get the place code and the product code. You can find these in the page source: vPC is the place (location) code and vGC is the goods (product) code; you can also get the product code from the URL.
This code explains the rest:
import requests, re, json
# Just a random product url, you can adapt the code into yours.
url = "http://ticket.interpark.com/Ticket/Goods/GoodsInfo.asp?GroupCode=20002746"
data = requests.get(url).text
# I used regex to get the matching values `vGC` and `vPC`
vGC = re.search(r"var vGC = \"(\d+)\"", data).groups()[0]
vPC = re.search(r"var vPC = \"(\d+)\"", data).groups()[0]
# Notice that I placed placeholders to use `format`. Placeholders are `{}`.
priceUrl = "http://ticket.interpark.com/Ticket/Goods/GoodsInfoJSON.asp?Flag=SalesPrice&GoodsCode={}&PlaceCode={}"
# Looks like that url needs a referer url and that is the goods page, we will pass it as header.
lastData = requests.get(priceUrl.format(vGC, vPC), headers={"Referer": url}).text
# As the data is a javascript object but inside it is a json object,
# we can remove the callback and parse the inside of callback as json data:
lastData = re.search(r"^Callback\((.*)\);$", lastData).groups()[0]
lastData = json.loads(lastData)["JSON"]
print(lastData)
Output:
[{'DblDiscountOrNot': 'N',
'GoodsName': '뷰티풀 민트 라이프 2020 - 공식 티켓',
'PointDiscountAmt': '0',
'PriceGradeName': '입장권',
'SalesPrice': '140000',
'SeatGradeName': '2일권'},
{'DblDiscountOrNot': 'N',
'GoodsName': '뷰티풀 민트 라이프 2020 - 공식 티켓',
'PointDiscountAmt': '0',
'PriceGradeName': '입장권',
'SalesPrice': '88000',
'SeatGradeName': '1일권'}]
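The essential trick above is unwrapping the JSONP-style Callback(...) envelope before parsing. Here is the same unwrapping applied to a small stand-in payload (the payload below is made up for illustration; the real response has the same shape):

```python
import re
import json

# A made-up JSONP-style response: a Callback(...) wrapper around
# a JSON object whose "JSON" key holds the list of price records.
raw = 'Callback({"JSON": [{"SalesPrice": "140000", "SeatGradeName": "2-day pass"}]});'

# Strip the wrapper, then parse what is inside the parentheses as JSON.
inner = re.search(r"^Callback\((.*)\);$", raw).group(1)
prices = json.loads(inner)["JSON"]
print(prices[0]["SalesPrice"])  # 140000
```

The same two lines are what the answer's code does to the real response from GoodsInfoJSON.asp.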
As I've recently started learning web scraping, I thought I would try to parse an HTML table from this site using the requests and bs4 modules.
I know I need to access the td elements inside tbody -- at least, that's how the web page looks.
When I try, though, it doesn't work properly: it only captures the td elements from thead, not from tbody. Hence, I cannot capture anything but the headers of the table.
I assume it has something to do with the requests module.
import requests

url = ('https://vstup.edbo.gov.ua/statistics/requests-by-university/'
       '?qualification=1&education-base=40')
r = requests.get(url)
print(r.text)
The result is as follows (pasting table-related part):
<table id="stats">
<caption></caption>
<thead>
<tr>
<td class="region">Регіон</td>
<td class="university">Назва закладу</td>
<td class="speciality">Спеціальність (спеціалізація)</td>
<td class="average-ball number" title="Середній конкурсний бал">СКБ</td>
<td class="requests-total number">Усього заяв</td>
<td class="requests-budget number">Заяв на бюджет</td>
</tr>
</thead>
<tbody></tbody>
</table>
So the tbody rows are missing from my response object, while they are present in the code of the web page. What am I doing wrong?
@Holdenweb suggested trying Selenium, and everything worked:
from selenium import webdriver
from bs4 import BeautifulSoup

url = ('https://vstup.edbo.gov.ua/statistics/requests-by-university/'
       '?qualification=1&education-base=40')
browser = webdriver.Firefox(executable_path=r'D:/folder/geckodriver.exe')
browser.get(url)
html = browser.page_source
After that, I used BeautifulSoup and managed to parse the web page.
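Once browser.page_source contains the rendered rows, extracting them with BeautifulSoup is straightforward. A sketch using the same #stats table structure (the row data below is hypothetical; the real data comes from page_source after the JavaScript has run):

```python
from bs4 import BeautifulSoup

# Hypothetical rendered HTML in the same shape as the #stats table.
rendered = """
<table id="stats">
  <thead>
    <tr><td class="region">Region</td><td class="university">University</td></tr>
  </thead>
  <tbody>
    <tr><td>Kyiv</td><td>KNU</td></tr>
    <tr><td>Lviv</td><td>LNU</td></tr>
  </tbody>
</table>
"""

soup = BeautifulSoup(rendered, "html.parser")
# Select only the body rows, skipping the header row in <thead>.
rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
        for tr in soup.select("#stats tbody tr")]
print(rows)  # [['Kyiv', 'KNU'], ['Lviv', 'LNU']]
```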
Sorry for this silly question, as I'm new to web scraping and have no knowledge of HTML, etc.
I'm trying to scrape data from this website. Specifically, from this part/table of the page:
末"四"位数 9775,2275,4775,7275
末"五"位数 03881,23881,43881,63881,83881,16913,66913
末"六"位数 313110,563110,813110,063110
末"七"位数 4210962,9210962,9785582
末"八"位数 63262036
末"九"位数 080876872
I'm sorry that it's in Chinese, and it looks terrible since I can't embed the picture. However, the table is roughly in the middle of the page (about 40% from the top). The id of the row I want is 'tr_zqh'.
Here is my source code:
import bs4 as bs
import urllib.request

def scrapezqh(url):
    source = urllib.request.urlopen(url).read()
    page = bs.BeautifulSoup(source, 'html.parser')
    print(page)

url = 'http://data.eastmoney.com/xg/xg/detail/300741.html?tr_zqh=1'
scrapezqh(url)
It scrapes most of the table, but not the part that I'm interested in. Here is part of what it returns, where I think the data should be:
<td class="tdcolor">网下有效申购股数(万股)
</td>
<td class="tdwidth" id="td_wxyxsggs">
</td>
</tr>
<tr id="tr_zqh">
<td class="tdtitle" id="td_zqhrowspan">中签号
</td>
<td class="tdcolor">中签号公布日期
</td>
<td class="ltxt" colspan="3"> 2018-02-22 (周四)
</td>
I'd like to get the content of this table row: tr id="tr_zqh" (the 6th row above). However, for some reason it doesn't scrape its data (there is no content below it). When I check the source code of the web page, though, the data is in the table. I didn't think it was a dynamic table that BeautifulSoup4 couldn't handle. I've tried both the lxml and html.parser parsers, and I've also tried pandas.read_html; they all returned the same results. I'd like some help understanding why it doesn't get the data and how I can fix it. Many thanks!
I forgot to mention that I tried page.find('tr'); it returned part of the table, but not the lines I'm interested in. page.find('tr') returns the 1st line of the screenshot, and I want the data of the 2nd and 3rd lines (highlighted in the screenshot).
If you extract a couple of variables from the initial page, you can use them to make a request to the API directly. Then you get a JSON object which you can use to get the data.
import requests
import re
import json

s = requests.session()
r = s.get('http://data.eastmoney.com/xg/xg/detail/300741.html?tr_zqh=1')
gpdm = re.search(r"var gpdm = '(.*)'", r.text).group(1)
token = re.search(r"http://dcfm\.eastmoney\.com/em_mutisvcexpandinterface/api/js/get\?type=XGSG_ZQH&token=(.*)&st=", r.text).group(1)
url = ("http://dcfm.eastmoney.com/em_mutisvcexpandinterface/api/js/get?type=XGSG_ZQH&token="
       + token + "&st=LASTFIGURETYPE&sr=1&filter=%28securitycode='" + gpdm + "'%29&js=var%20zqh=%28x%29")
r = s.get(url)
j = json.loads(r.text[8:])  # strip the leading "var zqh=" before parsing
for record in j:
    print(record['LOTNUM'])
Outputs:
9775,2275,4775,7275
03881,23881,43881,63881,83881,16913,66913
313110,563110,813110,063110
4210962,9210962,9785582
63262036
080876872
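The slice r.text[8:] works because the API wraps its JSON in a JavaScript assignment, var zqh=..., whose prefix is exactly 8 characters long. A stand-in demonstration (the payload below is made up, shaped like the real response):

```python
import json

# The API returns a JavaScript assignment rather than bare JSON:
#   var zqh=[{...}, ...]
# Stripping the "var zqh=" prefix leaves plain JSON.
raw = 'var zqh=[{"LOTNUM": "9775,2275,4775,7275", "LASTFIGURETYPE": 4}]'
records = json.loads(raw[len("var zqh="):])
print(records[0]["LOTNUM"])  # 9775,2275,4775,7275
```

Using len("var zqh=") instead of a bare 8 makes the intent of the slice obvious.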
From where I stand, your question isn't clear to me. But here's what I did.
I do a lot of web scraping, so I made a package that gets me Beautiful Soup objects of any web page. The package is here.
So my answer depends on that, but you can take a look at the source code and see that there's really nothing esoteric about it. You may drag out the soup-making part and use it as you wish.
Here we go.
pip install pywebber --upgrade
from pywebber import PageRipper
page = PageRipper(url='http://data.eastmoney.com/xg/xg/detail/300741.html?tr_zqh=1', parser='html5lib')
page_soup = page.soup
tr_zqh_table = page_soup.find('tr', id='tr_zqh')
From here you can do:
tr_zqh_table.find_all('td')
Output
[
<td class="tdtitle" id="td_zqhrowspan">中签号
</td>, <td class="tdcolor">中签号公布日期
</td>, <td class="ltxt" colspan="3"> 2018-02-22 (周四)
</td>
]
Going a bit further
for td in tr_zqh_table.find_all('td'):
    print(td.contents)
Output
['中签号\n ']
['中签号公布日期\n ']
['\xa02018-02-22 (周四)\n ']
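The stray '\n' and '\xa0' characters in the output above can be cleaned up with get_text(strip=True) instead of reading .contents directly:

```python
from bs4 import BeautifulSoup

# The same row as above: the cell text carries trailing newlines
# and a leading non-breaking space (\xa0).
html = '<tr id="tr_zqh"><td>中签号\n </td><td>\xa02018-02-22 (周四)\n </td></tr>'

soup = BeautifulSoup(html, "html.parser")
# get_text(strip=True) trims whitespace, including \n and \xa0.
texts = [td.get_text(strip=True) for td in soup.find_all("td")]
print(texts)  # ['中签号', '2018-02-22 (周四)']
```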
I read the answers to Parse HTML table to Python list? and tried to use the ideas to read/process my local HTML files downloaded from a web site
(the files contain one table and start with the <table class="table"> tag). I ran into problems due to the presence of two HTML tags:
with the <thead> tag the parse doesn't pick up the header, and the <tbody> tag causes both xml and lxml to fail completely.
I tried googling for a solution, but the answer is most likely embedded in the documentation somewhere for xml and/or lxml.
I'm just trying to plug into xml or lxml in the simplest way possible, but I would be happy if the community here pointed the way to other stable/trusted modules that might be more appropriate.
I realize I could edit the strings in Python to remove the tags, but that is not very elegant, and I'm trying to learn new things.
Here is the stripped down sample code illustrating the problem:
#--------*---------*---------*---------*---------*---------*---------*---------*
# Desc: Parse HTML table to list
#--------*---------*---------*---------*---------*---------*---------*---------*
import os, sys
from xml.etree import ElementTree as ET
from lxml import etree
# # this setting blows up
s = """<table class="table">
<thead>
<tr><th>PU</th><th>CA</th><th>OC</th><th>Range</th></tr>
</thead>
<tbody>
<tr>
<td>UTG</td><td></td><td>
</td><td>2.7%, KK+ AQs+ A5s AKo </td>
</tr>
<tr>
<td></td><td>BB</td><td>
</td><td>10.6%, 55+ A9s+ A9o+ </td>
</tr>
</tbody>
</table>
"""
# # open this up for clear sailing
if False:
s = """<table class="table">
<tr><th>PU</th><th>CA</th><th>OC</th><th>Range</th></tr>
<tr>
<td>UTG</td><td></td><td>
</td><td>2.7%, KK+ AQs+ A5s AKo </td>
</tr>
<tr>
<td></td><td>BB</td><td>
</td><td>10.6%, 55+ A9s+ A9o+ </td>
</tr>
</table>
"""
s = s.replace('\n','')
print('0:\n'+s)
while True:
    table = ET.XML(s)
    rows = iter(table)
    for row in rows:
        values = [col.text for col in row]
        print('1:')
        print(values)
    break

while True:
    table = etree.HTML(s).find("body/table")
    rows = iter(table)
    for row in rows:
        values = [col.text for col in row]
        print('2:')
        print(values)
    break

sys.exit()
While waiting for help showing how to do this in a 'Pythonic' way, I came up with an easy brute-force method:
with the string s set to the version containing the <thead> and <tbody> tags, apply the following code:
s = s.replace('<tbody>', '')
s = s.replace('</tbody>', '')
s = s.replace('<thead>', '')
s = s.replace('</thead>', '')
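A more Pythonic alternative to stripping the tags: ElementTree's .iter('tr') walks the whole subtree, so it finds the rows whether or not they are wrapped in <thead> and <tbody>:

```python
from xml.etree import ElementTree as ET

# The problematic variant of the table, sectioned with <thead>/<tbody>.
s = """<table class="table">
<thead>
<tr><th>PU</th><th>CA</th></tr>
</thead>
<tbody>
<tr><td>UTG</td><td></td></tr>
</tbody>
</table>"""

table = ET.XML(s)
# iter('tr') descends through <thead> and <tbody>, so no string
# surgery is needed; iter(table) would only see the direct children.
rows = [[col.text for col in row] for row in table.iter('tr')]
print(rows)  # [['PU', 'CA'], ['UTG', None]]
```

The empty cell comes back as None, which you can normalize with (col.text or '') if needed.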
I'm trying to extract data from a website using Beautiful Soup to parse the HTML. I'm currently trying to get the table data from the following web page:
link to webpage
I want to get the data from the table. First I save the page as an HTML file on my computer (this part works fine; I checked that I got all the information), but when I try to parse it with the following code:
soup = BeautifulSoup(fh, 'html.parser')
table = soup.find_all('table')
cols = table[0].find_all('tr')
cells = cols[1].find_all('td')
I don't get any results (specifically, it crashes, saying there's no element at index 1). Any idea where this could come from?
Thanks
OK, it was actually an issue in the HTML file: in the first row, the cells were opened with th tags but closed with td tags. I don't know much about HTML, but making the tags match solved the issue.
<tr class="listeEtablenTete">
<th title="Rubrique IC">Rubri. IC</td>
<th title="Alinéa">Ali. </td>
<th title="Date d'autorisation">Date auto.</td>
<th >Etat d'activité</td>
<th title="Régime">Rég.</td>
<th >Activité</td>
<th >Volume</td>
<th >Unité</td>
Thanks!
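The same fix can be applied programmatically before handing the file to the parser, by rewriting the stray closers. A sketch on an abridged one-row version of the header above (here the </td> closers are rewritten to </th>, the mirror image of the fix described):

```python
# The header row opens its cells with <th> but closes them with </td>.
header = ('<tr class="listeEtablenTete">'
          '<th title="Rubrique IC">Rubri. IC</td>'
          '<th>Volume</td>'
          '</tr>')

# Rewrite the mismatched closers so every <th> has a matching </th>.
fixed = header.replace('</td>', '</th>')
print('</td>' in fixed)  # False
```

Note this blanket replace is only safe on a row that contains no real td cells, like the header row here; for a whole document you would want to scope it to that row first.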