I'm scraping a website with Python 3 and BeautifulSoup and exporting the results to CSV. The issue I'm having is that some elements are blank, and when I print the page those elements go missing entirely. I would prefer they were still printed, even though they're blank. Because of this, the rows in my CSV file no longer line up with the columns whenever an element is blank. I am sure that if I can get the print working as expected, I can fix the issue in my CSV file.
Example HTML code:
<tr><td>item1</td><td>server11</td><td>env</td><td>uptime</td></tr>
<tr><td>item2</td><td></td><td>env</td><td>uptime</td></tr>
As you can see, item2 has a <td> tag which is blank.
soup = BeautifulSoup(content, 'lxml')
for s in soup.findAll('tr'):
    print(s.get_text(","))
The output is
item1,server11,env,uptime
item2,env,uptime
However, I would like the output to look like this:
item1,server11,env,uptime
item2,,env,uptime
You can use str.join to join the text from all the <td> tags.
For example:
from bs4 import BeautifulSoup
txt = '''
<tr><td>item1</td><td>server11</td><td>env</td><td>uptime</td></tr>
<tr><td>item2</td><td></td><td>env</td><td>uptime</td></tr>'''
soup = BeautifulSoup(txt, 'html.parser')
for tr in soup.select('tr'):
    print(','.join(td.get_text(strip=True) for td in tr.select('td')))
Prints:
item1,server11,env,uptime
item2,,env,uptime
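Since the original goal was a CSV export, the same per-row lists can go straight to csv.writer rather than being joined by hand. A minimal sketch, assuming an output file name of out.csv:
import csv
from bs4 import BeautifulSoup

txt = '''
<tr><td>item1</td><td>server11</td><td>env</td><td>uptime</td></tr>
<tr><td>item2</td><td></td><td>env</td><td>uptime</td></tr>'''

soup = BeautifulSoup(txt, 'html.parser')
with open('out.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    for tr in soup.select('tr'):
        # blank <td> tags become empty strings, keeping columns aligned
        writer.writerow([td.get_text(strip=True) for td in tr.select('td')])
A bonus of csv.writer over a manual ','.join is that it quotes any cell that itself contains a comma.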
I have an HTML file from which I want to extract an unordered list. I have the class name for this unordered list, and I am trying the following code:
soup = BeautifulSoup(HTML(open('dtaa.html').read()).__html__())
soup.find("ul",{"class":"name of class"})
dtaa.html is my file.
This gives me nothing. The unordered list is nested inside two divs; maybe this is the problem.
Thanks in advance
You can read the HTML file like this:
from bs4 import BeautifulSoup

with open("dtaa.html") as fp:
    soup = BeautifulSoup(fp, 'html.parser')

soup.find("ul", attrs={"class": "name of class"})
You could also try another parser, like:
soup = BeautifulSoup(fp, "html5lib")
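If the nesting inside the two divs is the concern, note that find() already searches all descendants, so depth alone shouldn't matter; still, a CSS selector can spell the path out explicitly. A minimal sketch, assuming a hypothetical class name your-class (substitute the real one; a single class name cannot contain spaces):
from bs4 import BeautifulSoup

with open("dtaa.html") as fp:
    soup = BeautifulSoup(fp, "html.parser")

# "div div ul.your-class" matches a <ul> anywhere below two nested divs
ul = soup.select_one("div div ul.your-class")
print(ul)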
Documentation:
BeautifulSoup
Reading & Writing files
I am trying to scrape table data from this link
http://bet.hkjc.com/racing/pages/odds_wp.aspx?date=30-01-2017&venue=ST&raceno=2&lang=en
Here is my code
from lxml import html
import webbrowser
import re
import xlwt
import requests
import bs4
content = requests.get("http://bet.hkjc.com/racing/pages/odds_wp.aspx?date=30-01-2017&venue=ST&raceno=1&lang=en").text # Get page content
soup = bs4.BeautifulSoup(content, 'lxml') # Parse page content
table = soup.find('div', {'id': 'detailWPTable'}) # Locate that table tag
rows = table.find_all('tr') # Find all row tags in that table
for row in rows:
    columns = row.find_all('td')  # Find all data tags in each column
    print('\n')
    for column in columns:
        print(column.text.strip(), end=' ')  # Output data in each column
It is not giving any output. Please help!
The table is generated by JavaScript, so requests will only return the static HTML without the rendered table contents.
Use Selenium.
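A minimal sketch of the Selenium route, assuming Chrome and a matching chromedriver are installed; Selenium executes the JavaScript, then BeautifulSoup parses the rendered DOM:
from selenium import webdriver
import bs4

driver = webdriver.Chrome()
driver.get("http://bet.hkjc.com/racing/pages/odds_wp.aspx?date=30-01-2017&venue=ST&raceno=1&lang=en")

# page_source holds the DOM after the scripts have run
soup = bs4.BeautifulSoup(driver.page_source, 'lxml')
driver.quit()

table = soup.find('div', {'id': 'detailWPTable'})
for row in table.find_all('tr'):
    print(' '.join(td.text.strip() for td in row.find_all('td')))
Depending on how long the table takes to load, an explicit WebDriverWait before reading page_source may be needed.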
I'm looking at the last line of your code:
print (column.text.strip(),end=' ') # Output data in each column
Are you sure that should read column.text? Maybe you could try column.strings or column.get_text(). Or even column.stripped_strings.
I just wanted to mention that the id you are using belongs to the wrapping div, not to the child table element.
Maybe you could try something like:
wrapper = soup.find('div', {'id': 'detailWPTable'})
table_body = wrapper.table.tbody
rows = table_body.find_all('tr')
But thinking about it, the tr elements are also descendants of the wrapping div, so find_all should still find them.
Update: adding tbody
Update: sorry, I'm not allowed to comment yet :). Are you sure you have the correct document? Have you checked the whole soup to see that the tags are actually there?
And I guess all those lines could be written as:
rows = soup.find('div', {'id': 'detailWPTable'}).find('tbody').find_all('tr')
Update: Yeah, the wrapper div is empty. So it seems that you don't get what's being generated by JavaScript, like the other answer said. Maybe you should try Selenium as suggested? Possibly PhantomJS as well.
You can try it with dryscrape like so:
import dryscrape
from bs4 import BeautifulSoup as BS

ses = dryscrape.Session()
ses.visit("http://bet.hkjc.com/racing/pages/odds_wp.aspx?date=30-01-2017&venue=ST&raceno=1&lang=en")
soup = BS(ses.body(), 'lxml')  # Parse page content
table = soup.find('div', {'id': 'detailWPTable'})  # Locate that table tag
rows = table.find_all('tr')  # Find all row tags in that table
for row in rows:
    columns = row.find_all('td')  # Find all data tags in each column
    print('\n')
    for column in columns:
        print(column.text.strip())
I have added a snippet of the HTML I wish to scrape.
I would like to go through each row (tbody) and scrape the relevant data using XPath.
The XPath for each row can be found as follows:
//*[@id="re_"]/table/tbody
But I'm unsure how to set this up in Python to loop through each tbody. There is no set number of tbody rows, so it could be any number.
e.g.
for each tbody:
    ...get data
Below is the HTML page:
http://www.racingpost.com/horses/result_home.sd?race_id=651402&r_date=2016-06-07&popup=yes#results_top_tabs=re_&results_bottom_tabs=ANALYSIS
Using lxml, you can pull the table directly using the class name and extract all the tbody tags with the XPath //table[@class="grid resultRaceGrid"]/tbody:
from lxml import html

x = html.parse("http://www.racingpost.com/horses/result_home.sd?race_id=651402&r_date=2016-06-07&popup=yes#results_top_tabs=re_&results_bottom_tabs=ANALYSIS")
tbodys = x.xpath('//table[@class="grid resultRaceGrid"]/tbody')
# iterate over the list of tbody tags
for tbody in tbodys:
    # get all the rows from the tbody
    for row in tbody.xpath("./tr"):
        # extract the tds and do whatever you want.
        tds = row.xpath("./td")
        print(tds)
Obviously you can be more specific: the td tags have class names you can use for extraction, and some tr tags also have classes.
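For instance, a sketch that filters cells by a hypothetical class name horse (check the real class names in the page source), continuing from the tbodys list above:
for tbody in tbodys:
    for row in tbody.xpath("./tr"):
        # keep only the text of cells carrying the (hypothetical) class
        cells = row.xpath('./td[@class="horse"]//text()')
        print(cells)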
I'm thinking you'd be interested in BeautifulSoup.
With your data, if you wanted to print all comment texts, it would be as simple as:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')  # html_doc holds your page's HTML
for tbody in soup.find_all('tbody'):
    # find() takes a class via the class_ keyword, not a CSS selector
    print(tbody.find(class_='commentText').get_text())
You can do much more fancy stuff. You can read more here.
I need to get the href links that the contents point to under a particular column of a table on Wikipedia. The page is "http://en.wikipedia.org/wiki/List_of_Telugu_films_of_2015". On this page there are a few tables with the class "wikitable". For each row, I need the links that the contents under the Title column point to, and I would like them copied into an Excel sheet.
I do not know the exact code for searching under a particular column, but I got this far and am getting a "NoneType object is not callable" error. I am using bs4. I wanted to extract at least some part of the table so I could work out how to narrow down to the href links under the Title column, but I am ending up with this error. The code is below:
from urllib.request import urlopen
from bs4 import BeautifulSoup
soup = BeautifulSoup(urlopen('http://en.wikipedia.org/wiki/List_of_Telugu_films_of_2015').read())
for row in soup('table', {'class': 'wikitable'})[1].tbody('tr'):
    tds = row('td')
    print(tds[0].string, tds[0].string)
A little guidance would be appreciated. Anyone know?
Figured out that the NoneType error might be related to the table filtering. The corrected code is below:
import urllib2
from bs4 import BeautifulSoup, SoupStrainer
content = urllib2.urlopen("http://en.wikipedia.org/wiki/List_of_Telugu_films_of_2015").read()
filter_tag = SoupStrainer("table", {"class":"wikitable"})
soup = BeautifulSoup(content, parse_only=filter_tag)
links = []
for sp in soup.find_all(align="center"):
    a_tag = sp('a')
    if a_tag:
        links.append(a_tag[0].get('href'))
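To get those links into an Excel sheet, one simple route is writing them to a CSV file, which Excel opens directly. A small sketch continuing from the links list above (the "wb" mode matches the Python 2 csv module this code is using):
import csv

with open("links.csv", "wb") as f:
    writer = csv.writer(f)
    for link in links:
        writer.writerow([link])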
import urllib, urllib2
from bs4 import BeautifulSoup, Comment
url='http://www.amazon.in/product-reviews/B00CE2LUKQ/ref=cm_cr_pr_top_link_1?ie=UTF8&showViewpoints=0&sortBy=bySubmissionDateDescending'
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content, "html.parser")
rows = soup.find_all('div', attrs={"class": "reviewText"})
print rows
This code is used to extract the reviews from the website. I only need the text, but I get it along with the div tags.
I need help with how to extract the text alone, between the div class tags.
for row in soup.find_all('div', attrs={"class": "reviewText"}):
    print row.text
or:
[row.text for row in rows]
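If the surrounding whitespace is unwanted too, get_text(strip=True) trims each review; a small variation on the comprehension above, keeping the Python 2 print of the original code:
# strip leading/trailing whitespace from each review's text
reviews = [row.get_text(strip=True) for row in rows]
for review in reviews:
    print review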