Extract Colored Text from Table with BeautifulSoup - python

I am new to Python and fairly new to programming in general. I'm trying to work out a script that uses BeautifulSoup to parse https://www.state.nj.us/mvc/ for any text that's red. The table I'm looking at is relatively simple HTML:
<html>
<body>
<div class="alert alert-warning alert-dismissable" role="alert">
<div class="table-responsive">
<table class="table table-sm" align="center" cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td width="24%">
<strong>
<font color="red">Bakers Basin</font>
</strong>
</td>
<td width="24%">
<strong>Oakland</strong>
</td>
...
...
...
</tr>
</tbody>
</table>
</div>
</div>
</body>
</html>
From the above I want to find Bakers Basin, but not Oakland, for example.
Here's the Python I've written (adapted from Cory Althoff, The Self-Taught Programmer, 2017, Triangle Connection LLC):
import urllib.request
from bs4 import BeautifulSoup
class Scraper:
    def __init__(self, site):
        self.site = site

    def scrape(self):
        r = urllib.request.urlopen(self.site)
        html = r.read()
        parser = "html.parser"
        soup = BeautifulSoup(html, parser)
        tabledmv = soup.find_all("font color=\"red\"")
        for tag in tabledmv:
            print("\n" + tabledmv.get_text())
website = "https://www.state.nj.us/mvc/"
Scraper(website).scrape()
I seem to be missing something here, though, because I can't get this to scrape through the table and return anything useful. The end result is that I want to add the time module and run this every X minutes, then have it log a message somewhere whenever a site goes red. (This is all so my wife can figure out the least crowded DMV to go to in New Jersey!)
Any help or guidance is much appreciated on getting the BeautifulSoup bit working.

The table is actually loaded from a different page, https://www.state.nj.us/mvc/locations/agency.htm.
To only get text that's red you can use the CSS selector soup.select('font[color="red"]'), as @Mr. Polywhirl mentioned:
import urllib.request
from bs4 import BeautifulSoup
class Scraper:
    def __init__(self, site):
        self.site = site

    def scrape(self):
        r = urllib.request.urlopen(self.site)
        html = r.read()
        parser = "html.parser"
        soup = BeautifulSoup(html, parser)
        tabledmv = soup.select('font[color="red"]')[1:]
        for tag in tabledmv:
            print(tag.get_text())
website = "https://www.state.nj.us/mvc/locations/agency.htm"
Scraper(website).scrape()
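Since you mentioned wanting to run this every X minutes and log when a site goes red, here is a minimal sketch of that loop. The ten-minute interval, the red_log.txt file name, and appending to a plain text file are just assumptions for illustration, not anything the site requires:

import time
import urllib.request
from bs4 import BeautifulSoup

def scrape_red(site):
    # return the text of every red <font> tag on the page
    html = urllib.request.urlopen(site).read()
    soup = BeautifulSoup(html, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select('font[color="red"]')]

while True:
    red_towns = scrape_red("https://www.state.nj.us/mvc/locations/agency.htm")
    with open("red_log.txt", "a") as log:  # hypothetical log file
        log.write("{}: {}\n".format(time.ctime(), ", ".join(red_towns)))
    time.sleep(10 * 60)  # wait ten minutes before checking again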

The data is loaded from another location, in this case 'https://www.state.nj.us/mvc/locations/agency.htm'. To get the towns plus the header for each town, you can use this example:
import requests
from bs4 import BeautifulSoup
url = 'https://www.state.nj.us/mvc/locations/agency.htm'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
for t in soup.select('td:has(font)'):
    i = t.find_previous('tr').select('td').index(t)
    if i < 2:
        print('{:<20} {}'.format(' '.join(t.text.split()), 'Licensing Centers'))
    else:
        print('{:<20} {}'.format(' '.join(t.text.split()), 'Vehicle Centers'))
Prints:
Bakers Basin Licensing Centers
Cherry Hill Vehicle Centers
Springfield Vehicle Centers
Bayonne Licensing Centers
Paterson Licensing Centers
East Orange Vehicle Centers
Trenton Vehicle Centers
Rahway Licensing Centers
Hazlet Vehicle Centers
Turnersville Vehicle Centers
Jersey City Vehicle Centers
Wallington Vehicle Centers
Delanco Licensing Centers
Lakewood Vehicle Centers
Washington Vehicle Centers
Eatontown Licensing Centers
Edison Licensing Centers
Toms River Licensing Centers
Newton Vehicle Centers
Freehold Licensing Centers
Runnemede Vehicle Centers
Newark Licensing Centers
S. Brunswick Vehicle Centers
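If you only want the towns that are actually red (the original goal), the same approach can be narrowed with an attribute selector inside :has(). A small variation on the snippet above, assuming BeautifulSoup 4.7+ so that the :has() selector is available:

for t in soup.select('td:has(font[color="red"])'):
    i = t.find_previous('tr').select('td').index(t)
    label = 'Licensing Centers' if i < 2 else 'Vehicle Centers'
    print('{:<20} {}'.format(' '.join(t.text.split()), label))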

Related

Beautifulsoup4 - not selecting all instances of span class

I am attempting to scrape data from a website that uses non-specific span classes to format/display content. The pages present information about chemical products, and each product is described within a single div class.
I first parsed by that div class and am working to pull the data I need from there. I have been able to get many things, but the parts I can't seem to pull are within the span class "ppisreportspan".
If you look at the code, you will note that it appears multiple times within each chemical description.
<tr>
<td><h4 id='stateprod'>MAINE STATE PRODUCT REPORT</h4><hr class='report'><span style="color:Maroon;" Class="subtitle">Company Number: </span><span style='color:black;' Class="subtitle">38</span><br /><span Class="subtitle">MONSANTO COMPANY <br/>800 N. LINDBERGH BOULEVARD <br/>MAIL STOP FF4B <br/>ST LOUIS MO 63167-0001<br/></span><br/><span style="color:Maroon;" Class="subtitle">Number of Currently Registered Products: </span><span style='color:black; font-size:14px' class="subtitle">80</span><br /><br/><p class='noprint'><img alt='' src='images/epalogo.png' /> View the label in the US EPA Pesticide Product Label System (PPLS).<br /><img alt='' src='images/alstar.png' /> View the label in the Accepted Labels State Tracking and Repository (ALSTAR).<br /></p>
<hr class='report'>
<div class='nopgbrk'>
<span class='ppisreportspanprodname'>PRECEPT INSECTICIDE </span>
<br/>EPA Registration Number: <a href = "http://iaspub.epa.gov/apex/pesticides/f?p=PPLS:102:::NO::P102_REG_NUM:100-1075" target='blank'>100-1075-524 <img alt='EPA PPLS Link' src='images/pplslink.png'/></a>
<span class='line-break'></span>
<span class=ppisProd>ME Product Number: </span>
<span class="ppisreportspan">2014000996</span>
<br />Registration Year: <span class="ppisreportspan">2019</span>
Type: <span class="ppisreportspan">RESTRICTED</span><br/><br/>
<table width='100%'>
<tr>
<td width='13%'>Percent</td>
<td style='width:87%;align:left'>Active Ingredient</td>
</tr>
<tr>
<td><span class="ppisreportspan">3.0000</span></td>
<td><span class="ppisreportspan">Tefluthrin (128912)</span></td>
</tr>
</table><hr />
</div>
<div class='nopgbrk'>
<span class='ppisreportspanprodname' >ACCELERON IC-609 INSECTICIDE SEED TREATMENT FOR CORN</span>
<br/>EPA Registration Number: <a href = "http://iaspub.epa.gov/apex/pesticides/f?p=PPLS:102:::NO::P102_REG_NUM:264-789" target='blank'>264-789-524 <img alt='EPA PPLS Link' src='images/pplslink.png'/>
</a><span class='line-break'></span>
<span class=ppisProd>ME Product Number: <a href = "alstar_label.aspx?LabelId=116671" target = 'blank'>2009005053</span>
<img alt='ALSTAR Link' src='images/alstar.png'/></a>
<br />Registration Year: <span class="ppisreportspan">2019</span>
<br/>
<table width='100%'>
<tr>
<td width='13%'>Percent</td>
<td style='width:87%;align:left'>Active Ingredient</td>
</tr>
<tr>
<td><span class="ppisreportspan">48.0000</span></td>
<td><span class="ppisreportspan">Clothianidin (44309)</span></td>
</tr>
</table><hr />
</div>
This sample includes two chemicals. One has an "alstar" ID and link and one does not. Both have registration years. Those are the data points that are hanging me up.
You may also note that there is a 10-digit code stored in "ppisreportspan" in the first example. I was able to extract that as part of the "ppisProd" span for any record that doesn't have the ALSTAR link. I don't understand why, but it reinforces the point that my parsing process seems to ignore that span class.
I have tried various methods for the last 2 days based on all kinds of different answers on SO, so I can't possibly list them all. I seem to be able to either get anything from the first "span" to the end on the last span, or I get "nonetype" errors or empty lists.
This one gets the closest:
It returns the correct spans for many div chunks, but it still skips (returns an empty list []) any of the ones that have ALSTAR links, like the second one in the example.
[picture showing data and then a series of three sets of empty brackets where the data should be]
import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl
import re
url = input('Enter URL:')
hand = open(url)
soup = BeautifulSoup(hand, 'html.parser')
#create a list of chunks by product (div)
products = soup.find_all('div' , class_ ='nopgbrk')
print(type(products))
print(len(products))
tempalstars =[]
rptspanclasses = []
regyears = []
alstarIDs = []
asltrlinks = []
# read the span tags
for product in products:
    tempalstar = product.find_all('span', class_="ppisreportspan")
    tempalstars.append(tempalstar)
    print(tempalstar)
Ultimately, I want to be able to select the text for the year as well as the ALSTAR link out of these span statements for each div chunk, but I will cross that bridge when I can get the code to find all the instances of that class.
Alternatively, is there some easier way I can get the Registration Year and the ALSTAR link (e.g. <a href = "alstar_label.aspx?LabelId=116671" target = 'blank'>2009005053</span> <img alt='ALSTAR Link' src='images/alstar.png'/></a>) rather than what I am trying to do?
I am using Python 3.7.2. Thank you!
I managed to get some data from this site. All you need to know is the company number; in the case of Monsanto, the number is 38 (this number is shown after selecting Maine and typing "monsanto" in the search box):
import re
import requests
from bs4 import BeautifulSoup
url_1 = 'http://npirspublic.ceris.purdue.edu/state/state_menu.aspx?state=ME'
url_2 = 'http://npirspublic.ceris.purdue.edu/state/company.aspx'
company_name = 'monsanto'
company_number = '38'
with requests.session() as s:
    r = s.get(url_1)
    soup = BeautifulSoup(r.text, 'lxml')

    data = {i['name']: '' for i in soup.select('input[name]')}
    for i in soup.select('input[value]'):
        data[i['name']] = i['value']

    data['ctl00$ContentPlaceHolder1$search'] = 'company'
    data['ctl00$ContentPlaceHolder1$TextBoxInput1'] = company_name

    r = s.post(url_1, data=data)
    soup = BeautifulSoup(r.text, 'lxml')

    data = {i['name']: '' for i in soup.select('input[name]')}
    for i in soup.select('input[value]'):
        data[i['name']] = i['value']

    data = {k: v for k, v in data.items() if not k.startswith('ctl00$ContentPlaceHolder1$')}
    data['ctl00$ContentPlaceHolder1${}'.format(company_number)] = 'Display+Products'

    r = s.post(url_2, data=data)
    soup = BeautifulSoup(r.text, 'lxml')

    for div in soup.select('.nopgbrk'):
        # extract name
        print(div.select_one('.ppisreportspanprodname').text)

        # extract ME product number
        s = ''.join(re.findall(r'\d{10}', div.text))
        print(s)

        # extract ALSTAR link
        s = div.select_one('a[href*="alstar_label.aspx"]')
        if s:
            print(s['href'])
        else:
            print('No ALSTAR link')

        # extract registration year
        s = div.find(text=lambda t: 'Registration Year:' in t)
        if s:
            print(s.next.text)
        else:
            print('No registration year.')

        print('-' * 80)
Prints:
PRECEPT INSECTICIDE
2014000996
No ALSTAR link
2019
--------------------------------------------------------------------------------
ACCELERON IC-609 INSECTICIDE SEED TREATMENT FOR CORN
2009005053
alstar_label.aspx?LabelId=117531
2019
--------------------------------------------------------------------------------
ACCELERON D-342 FUNGICIDE SEED TREATMENT
2015000498
alstar_label.aspx?LabelId=117538
2019
--------------------------------------------------------------------------------
ACCELERON DX-309
2009005026
alstar_label.aspx?LabelId=117559
2019
--------------------------------------------------------------------------------
... and so on.
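If you already have the report saved locally (as in your original open(url) approach), the same extraction logic should work on that file directly. A rough sketch, assuming a hypothetical saved copy named maine_report.html that contains the same .nopgbrk divs:

import re
from bs4 import BeautifulSoup

with open('maine_report.html') as f:  # hypothetical saved copy of the report page
    soup = BeautifulSoup(f, 'lxml')

for div in soup.select('.nopgbrk'):
    name = div.select_one('.ppisreportspanprodname').text.strip()
    me_number = ''.join(re.findall(r'\d{10}', div.text))     # ME product number
    alstar = div.select_one('a[href*="alstar_label.aspx"]')  # ALSTAR link, if any
    year = div.find(text=lambda t: 'Registration Year:' in t)
    print(name, me_number,
          alstar['href'] if alstar else 'No ALSTAR link',
          year.next.text if year else 'No registration year.')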

How to automate scraping wikipedia info box specifically and print the data using python for any wiki page?

My task is to automate printing the Wikipedia infobox data. As an example, I am scraping the Star Trek Wikipedia page (https://en.wikipedia.org/wiki/Star_Trek) to extract the infobox section from the right-hand side and print it row by row on screen using Python. I specifically want the infobox. So far I have done this:
from bs4 import BeautifulSoup
import urllib.request
# specify the url
urlpage = 'https://en.wikipedia.org/wiki/Star_Trek'
# query the website and return the html to the variable 'page'
page = urllib.request.urlopen(urlpage)
# parse the html using beautiful soup and store in variable 'soup'
soup = BeautifulSoup(page, 'html.parser')
# find results within table
table = soup.find('table', attrs={'class': 'infobox vevent'})
results = table.find_all('tr')
print(type(results))
print('Number of results', len(results))
print(results)
This gives me everything from the info box. A snippet is shown below:
[<tr><th class="summary" colspan="2" style="text-align:center;font-size:125%;font-weight:bold;font-style: italic; background: lavender;"><i>Star Trek</i></th></tr>, <tr><td colspan="2" style="text-align:center"><a class="image" href="/wiki/File:Star_Trek_TOS_logo.svg"><img alt="Star Trek TOS logo.svg" data-file-height="132" data-file-width="560" height="59"
I want to extract the data only and print it on screen. So what I want is:
Created by Gene Roddenberry
Original work Star Trek: The Original Series
Print publications
Book(s)
List of reference books
List of technical manuals
Novel(s) List of novels
Comics List of comics
Magazine(s)
Star Trek: The Magazine
Star Trek Magazine
And so on till the end of the infobox. So basically I want a way of printing every row of the infobox data so I can automate it for any wiki page. (The class of the infobox table on all wiki pages is 'infobox vevent', as shown in the code.)
This page should help you parse your HTML as a simple string without the HTML tags: Using BeautifulSoup Extract Text without Tags
This is code from that page; it belongs to @0605002:
>>> html = """
<p>
<strong class="offender">YOB:</strong> 1987<br />
<strong class="offender">RACE:</strong> WHITE<br />
<strong class="offender">GENDER:</strong> FEMALE<br />
<strong class="offender">HEIGHT:</strong> 5'05''<br />
<strong class="offender">WEIGHT:</strong> 118<br />
<strong class="offender">EYE COLOR:</strong> GREEN<br />
<strong class="offender">HAIR COLOR:</strong> BROWN<br />
</p>
"""
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(html)
>>> print soup.text
YOB: 1987
RACE: WHITE
GENDER: FEMALE
HEIGHT: 5'05''
WEIGHT: 118
EYE COLOR: GREEN
HAIR COLOR: BROWN
Using BeautifulSoup, you then need to reformat the data as you want; use fresult = [e.text for e in results] to get the text of each row.
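For example, applied to the results list from your code, a minimal sketch along those lines (get_text with a separator keeps the header and value apart):

for row in results:
    # each row is a <tr>; print its combined text, skipping empty rows
    text = row.get_text(" ", strip=True)
    if text:
        print(text)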
If you want to read an HTML table you can try some code like this, though this uses pandas:
import pandas
urlpage = 'https://en.wikipedia.org/wiki/Star_Trek'
data = pandas.read_html(urlpage)[0]
null = data.isnull()
for x in range(len(data)):
    first = data.iloc[x][0]
    second = data.iloc[x][1] if not null.iloc[x][1] else ""
    print(first, second, "\n")

xpath: need robust code for <td> tag with python lxml

I want to scrape only the code and name columns of the table from the HTML below:
<div id="ctl00_cph1_divSymbols" class="cb"><table class="quotes">
<TR><TH>Code</TH><TH>Name</TH><TH style="text-align:right;">High</TH><TH style="text-align:right;">Low</TH><TH style="text-align:right;">Close</TH><TH style="text-align:right;">Volume</TH><TH style="text-align:center;" colspan=3>Change</TH><th width=40> </th></tr>
<tr class="ro" onclick="location.href='/stockquote/SGX/Z25.htm';" style="color:green;"><td>Z25</td><td>Yanlord Land Group Limited</td><td align=right>1.400</td><td align=right>1.380</td><td align=right>1.385</td><td align=right>1,244,200</td><td align="right">0.005</td><td align="center"><IMG src="/images/up.gif"></td><td align="left">0.36</td><td align="right"><img src="/images/dl.gif" width=14 height=14> <img src="/images/chart.gif" width=14 height=14></td></tr>
<tr class="re" onclick="location.href='/stockquote/SGX/Z59.htm';" style="color:green;"><td>Z59</td><td>Yoma Strategic Holdings Ltd</td><td align=right>0.5850</td><td align=right>0.5750</td><td align=right>0.5850</td><td align=right>2,312,600</td><td align="right">0.0100</td><td align="center"><IMG src="/images/up.gif"></td><td align="left">1.74</td><td align="right"><img src="/images/dl.gif" width=14 height=14> <img src="/images/chart.gif" width=14 height=14></td></tr>
<tr class="ro" onclick="location.href='/stockquote/SGX/Z74.htm';" style="color:green;"><td>Z74</td><td>Singtel</td><td align=right>3.930</td><td align=right>3.860</td><td align=right>3.910</td><td align=right>21,674,300</td><td align="right">0.040</td><td align="center"><IMG src="/images/up.gif"></td><td align="left">1.03</td><td align="right"><img src="/images/dl.gif" width=14 height=14> <img src="/images/chart.gif" width=14 height=14></td></tr>
<tr class="re" onclick="location.href='/stockquote/SGX/Z77.htm';" style="color:green;"><td>Z77</td><td>Singtel 10</td><td align=right>3.920</td><td align=right>3.860</td><td align=right>3.900</td><td align=right>69,460</td><td align="right">0.050</td><td align="center"><IMG src="/images/up.gif"></td><td align="left">1.30</td><td align="right"><img src="/images/dl.gif" width=14 height=14> <img src="/images/chart.gif" width=14 height=14></td></tr>
</table>
</div>
The desired output is:
Z25,Yanlord Land Group Limited
Z59,Yoma Strategic Holdings Ltd
Z74,Singtel
Z77,Singtel 10
My Python code is below:
from lxml import html
import requests
page = requests.get('http://eoddata.com/stocklist/SGX/Z.htm')
tree = html.fromstring(page.content)
tree1 = tree.xpath('//td/a[contains(@href,"/stockquote/SGX")]/text()')
tree2 = tree.xpath('//tr[@class]/td/following-sibling::td/text()')
tree1 gives me the codes correctly, but tree2 mixes the names with a lot of unwanted data. How can I make the code robust enough to produce the desired output?
You could use td[2] to get the second td tag:
from lxml import html
import requests
page = requests.get('http://eoddata.com/stocklist/SGX/Z.htm')
tree = html.fromstring(page.content)
tree1 = tree.xpath('//td/a[contains(@href,"/stockquote/SGX")]/text()')
# tree2 = tree.xpath('//tr[@class]/td/following-sibling::td/text()')
tree2 = tree.xpath('//tr[@class and @onclick]/td[2]/text()')
print tree1, tree2
Notice that in order to avoid the bottom-right table, [@class and @onclick] is used to locate the table we need.
Result:
['Z25', 'Z59', 'Z74', 'Z77'] ['Yanlord Land Group Limited', 'Yoma Strategic Holdings Ltd', 'Singtel', 'Singtel 10']
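To get the comma-separated lines you listed, one way is simply to pair the two lists up (they should stay the same length, since both come from the same rows):

for code, name in zip(tree1, tree2):
    print('{},{}'.format(code, name))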

Parsing IMDB with BeautifulSoup

I've pulled the following HTML from IMDB's mobile site using BeautifulSoup, with Python 2.7.
I want to create a separate object for the episode number '1', the title 'Winter is Coming', and the IMDB score '8.9'. I can't seem to figure out how to split apart the episode number and the title.
<a class="btn-full" href="/title/tt1480055?ref_=m_ttep_ep_ep1">
<span class="text-large">
1.
<strong>
Winter Is Coming
</strong>
</span>
<br/>
<span class="mobile-sprite tiny-star">
</span>
<strong>
8.9
</strong>
17 Apr. 2011
</a>
You can use find to locate the span with the class text-large to get to the specific element you need.
Once you have your desired span, you can use next to grab the next line, containing the episode number, and find to locate the strong tag containing the title:
html = """
<a class="btn-full" href="/title/tt1480055?ref_=m_ttep_ep_ep1">
<span class="text-large">
1.
<strong>
Winter Is Coming
</strong>
</span>
<br/>
<span class="mobile-sprite tiny-star">
</span>
<strong>
8.9
</strong>
17 Apr. 2011
</a>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
span = soup.find('span', attrs={'class': 'text-large'})
ep = str(span.next).strip()
title = str(span.find('strong').text).strip()
print ep
print title
> 1.
> Winter Is Coming
Once you have each a class="btn-full", you can use the span classes to get the tags you want. The strong tag is a child of the span with the text-large class, so you just need to call .strong.text on that Tag. For the span with the CSS class mobile-sprite tiny-star, you need to find the next strong tag, as it is a sibling of the span, not a child:
h = """<a class="btn-full" href="/title/tt1480055?ref_=m_ttep_ep_ep1">
<span class="text-large">
1.
<strong>
Winter Is Coming
</strong>
</span>
<br/>
<span class="mobile-sprite tiny-star">
</span>
<strong>
8.9
</strong>
17 Apr. 2011
</a>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(h)
title = soup.select_one("span.text-large").strong.text.strip()
score = soup.select_one("span.mobile-sprite.tiny-star").find_next("strong").text.strip()
print(title, score)
Which gives you:
(u'Winter Is Coming', u'8.9')
If you really want to get the episode number, the simplest way is to split the text once:
soup = BeautifulSoup(h)
ep, title = soup.select_one("span.text-large").text.split(None, 1)
score = soup.select_one("span.mobile-sprite.tiny-star").find_next("strong").text.strip()
print(ep, title.strip(), score)
Which will give you:
(u'1.', u'Winter Is Coming', u'8.9')
Using URL HTML scraping with requests and a regular-expression search:
import os, sys, requests
frame = ('http://www.imdb.com/title/tt1480055?ref_=m_ttep_ep_ep1')
f = requests.get(frame)
helpme = f.text
import re
result = re.findall('itemprop="name" class="">(.*?) ', helpme)
result2 = re.findall('"ratingCount">(.*?)</span>', helpme)
result3 = re.findall('"ratingValue">(.*?)</span>', helpme)
print result[0].encode('utf-8')
print result2[0]
print result3[0]
output:
Winter Is Coming
24,474
9.0

Deleting all content between brackets from a string using python

I am using beautiful soup to grab data from an html page, and when I grab the data, I am left with this:
<tr>
<td class="main rank">1</td>
<td class="main company"><a href="/colleges/williams-college/">
<img alt="" src="http://i.forbesimg.com/media/lists/colleges/williams-college_50x50.jpg">
<h3>Williams College</h3></img></a></td>
<td class="main">Massachusetts</td>
<td class="main">$61,850</td>
<td class="main">2,124</td>
</tr>
This is the beautifulsoup command I am using to get this:
html = open('collegelist.html')
test = BeautifulSoup(html)
soup = test.find_all('tr')
I now want to manipulate this text so that it outputs
1
Williams College
Massachusetts
$61,850
2,124
and I am having difficulty doing so for the entire document, where I have about 700 of these entries. Any advice would be appreciated.
Just get the .text (or use get_text()) for every tr in the loop:
soup = BeautifulSoup(open('collegelist.html'))
for tr in soup.find_all('tr'):
    print tr.text  # or tr.get_text()
For the HTML you've provided it prints:
1
Williams College
Massachusetts
$61,850
2,124
Use get_text():
soup = BeautifulSoup(html)
"".join([x.get_text() for x in soup.find_all('tr')])
