Python - Crawl Multiple Classes within a Page Using BeautifulSoup

I am trying to crawl Agoda's daily hotel prices for multiple room types, along with additional information such as promotion details, breakfast conditions, and the book-now-pay-later policy.
The code I have is below:
import requests
import time
from bs4 import BeautifulSoup

url = "http://www.agoda.com/ambassador-hotel-taipei/hotel/taipei-tw.html?asq=8m91A1C3D%252bTr%252bvRSmuClW5dm5vJXWO5dlQmHx%252fdU9qxilNob5hJg0b218wml6rCgncYsXBK0nWktmYtQJCEMu0P07Y3BjaTYhdrZvavpUnmfy3moWn%252bv8f2Lfx7HovrV95j6mrlCfGou99kE%252bA0aX0aof09AStNs69qUxvAVo53D4ZTrmAxm3bVkqZJr62cU&tyra=1%257c2&searchrequestid=2e2b0e8c-cadb-465b-8dea-2222e24a1678&pingnumber=1&checkin=2015-10-01&los=1"
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')
n = len(soup.select('.room-name'))
for i in range(0, n):
    en_room = soup.select('.room-name')[i].text.strip()
    currency = soup.select('.currency')[i].text
    price = soup.select('.sellprice')[i].text
    try:
        sp_info = soup.select('.left-room-info')[i].text.strip()
    except Exception:
        sp_info = "N/A"
    try:
        pay_later = soup.select('.book-now-paylater')[i].text.strip()
    except Exception:
        pay_later = "N/A"
    print(en_room, i + 1, currency, price, sp_info, pay_later)
    time.sleep(1)
I have two questions:
(1) The "left-room-info" class seems to contain two sub-classes "breakfast" and "room-promo". These sub-classes only show up when the particular room type provides such services.
When only one of the sub-classes shows up, the output works out well. However, when neither sub-class shows up, the output is empty where I expect it to show "N/A". Also, when both sub-classes show up, the output has unnecessary empty lines that cannot be removed by .strip().
Is there any way to solve these problems?
(2) When I tried to extract information from the class '.book-now-paylater', the extracted data did not match each room type. For example, assuming there are 10 room types and only rooms 2, 4, 6, and 8 allow travelers to book now and pay later, the code extracts exactly 4 pieces of book-now-pay-later information, but these 4 pieces are then assigned inappropriately to room types 1, 2, 3, and 4.
Is there any way to fix this problem?
Thank you for your help!
Gary

(1) This is happening because even if there is no text in the '.left-room-info' selection, it won't throw an exception, so your except block never runs. You should instead check whether the value is an empty string (''). You can do this with a simple if not string_var, like this:
sp_info = soup.select('.left-room-info')[i].text.strip()
if not sp_info:
    sp_info = "N/A"
When both sub-classes show up, you should split the string on the carriage return ('\r') and then strip each of the resulting pieces. The code would look something like this (note that sp_info is now a list, not just a string):
sp_info = soup.select('.left-room-info')[i].text.strip().split('\r')
if len(sp_info) > 1:
    sp_info = [info.strip() for info in sp_info]
Putting these pieces together, we'll get something like this:
sp_info = soup.select('.left-room-info')[i].text.strip().split('\r')
if len(sp_info) > 1:
    sp_info = [info.strip() for info in sp_info]
elif not sp_info[0]:  # check for empty string
    sp_info = ["N/A"]  # keep sp_info a list for consistency
(2) is a little more complicated. You're going to have to change how you parse the page; namely, you're probably going to have to select on .room-type. The way you're selecting the book-now-pay-later elements doesn't associate them with any other elements; it just selects the 4 instances of that class. Here is how I would go about it:
import requests
from bs4 import BeautifulSoup

url = "http://www.agoda.com/ambassador-hotel-taipei/hotel/taipei-tw.html?asq=8m91A1C3D%252bTr%252bvRSmuClW5dm5vJXWO5dlQmHx%252fdU9qxilNob5hJg0b218wml6rCgncYsXBK0nWktmYtQJCEMu0P07Y3BjaTYhdrZvavpUnmfy3moWn%252bv8f2Lfx7HovrV95j6mrlCfGou99kE%252bA0aX0aof09AStNs69qUxvAVo53D4ZTrmAxm3bVkqZJr62cU&tyra=1%257c2&searchrequestid=2e2b0e8c-cadb-465b-8dea-2222e24a1678&pingnumber=1&checkin=2015-10-01&los=1"
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')
rooms = soup.select('.room-type')[1:]  # the first instance of the class isn't a room
room_list = []
for room in rooms:
    room_info = {}
    room_info['en_room'] = room.select('.room-name')[0].text.strip()
    room_info['currency'] = room.select('.currency')[0].text.strip()
    room_info['price'] = room.select('.sellprice')[0].text.strip()
    sp_info = room.select('.left-room-info')[0].text.strip().split('\r')
    if len(sp_info) > 1:
        sp_info = ", ".join([info.strip() for info in sp_info])
    elif not sp_info[0]:  # check for empty string
        sp_info = "N/A"
    else:
        sp_info = sp_info[0]  # single non-empty piece: unwrap the one-element list
    room_info['sp_info'] = sp_info
    pay_later = room.select('.book-now-paylater')
    room_info['pay_later'] = pay_later[0].text.strip() if pay_later else "N/A"
    room_list.append(room_info)
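From there you can print or export room_list however you like; for example, a minimal sketch:
for i, room in enumerate(room_list, 1):
    print(i, room['en_room'], room['currency'], room['price'], room['sp_info'], room['pay_later'])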

In your code, you are not traversing the DOM correctly, which will cause problems in scraping (e.g., your second problem). I shall give a suggestive code snippet (not an exact solution), hoping you can solve the first problem by yourself.
# select all room types by their table tr tag
room_types = soup.find_all('tr', class_="room-type")
# iterate over the list to scrape data from each td or div inside the tr
for room in room_types:
    en_room = room.find('div', class_='room-name').text.strip()

Related

Why is for looping not looping?

I'm new to programming and cannot figure out why this won't loop. It prints and converts the first item exactly how I want, but stops after the first iteration.
from bs4 import BeautifulSoup
import requests
import json

url = 'http://books.toscrape.com/'
page = requests.get(url)
html = BeautifulSoup(page.content, 'html.parser')
section = html.find_all('ol', class_='row')
for books in section:
    # Title Element
    header_element = books.find("article", class_='product_pod')
    title_element = header_element.img
    title = title_element['alt']
    # Price Element
    price_element = books.find(class_='price_color')
    price_str = str(price_element.text)
    price = price_str[1:]
    # Create JSON
    final_results_json = {"Title": title, "Price": price}
    final_result = json.dumps(final_results_json, sort_keys=True, indent=1)
    print(title)
    print(price)
    print()
    print(final_result)
First, clarify what you are looking for. Presumably you wish to print the title, price, and final_result for every book scraped from books.toscrape.com. The code is working as written, even though the expectation is different: you are finding all the "ol" tags with class name "row", and there is just one such element on the page. Thus section has only one element, so the for loop iterates just once.
How to debug it?
Check the type of section: type(section)
Print section to see what it contains
Write some print statements in the for loop to understand what happens at each step
It isn't hard to debug this one.
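Concretely, the debugging steps above might look like this (a quick sketch):
print(type(section))  # <class 'bs4.element.ResultSet'>
print(len(section))   # 1 -- only one <ol class="row"> on the page
for books in section:
    print(books.name)  # 'ol' -- you are looping over the single list, not the books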
You need to change:
section = html.find_all('li', class_='col-xs-6 col-sm-4 col-md-3 col-lg-3')
There is only one <ol> in that document. I think you want:
for book in section[0].find_all('li'):
ol means ordered list, of which there is one in this case; there are many li (list item) elements inside that ol.
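Putting that suggestion together with the original loop body, a corrected version might look like this (a sketch reusing the question's variable names):
section = html.find('ol', class_='row')  # the single ordered list
for book in section.find_all('li'):      # one li per book
    header_element = book.find("article", class_='product_pod')
    title = header_element.img['alt']
    price = book.find(class_='price_color').text[1:]  # drop the currency symbol
    print(title, price)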

Removing new line characters in web scrape

I'm trying to scrape baseball lineup data but would only like to return the player names. However, as of right now, it is giving me the position, a newline character, the name, another newline character, and then the batting side. For example, I want
'D. Fletcher'
but instead I get
'LF\nD. Fletcher\nR'
Additionally, it is giving me all players on the page. It would be preferable to group them by team, which maybe requires some sort of dictionary setup, but I am not sure what that code would look like.
I've tried using the strip function, but I believe that only removes leading or trailing characters as opposed to those in the middle. I've tried researching how to get just the title information from the anchor tag but have not figured out how to do that.
from bs4 import BeautifulSoup
import requests

url = 'https://www.rotowire.com/baseball/daily_lineups.htm'
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
players = soup.find_all('li', {'class': 'lineup__player'})
# for link in players.find('a'):
#     print(link.string)
awayPlayers = [player.text.strip() for player in players]
print(awayPlayers)
You should only get the .text for the a tag, not the whole li:
awayPlayers = [player.find('a').text.strip() for player in players]
That would result in something like the following:
['L. Martin', 'Jose Ramirez', 'J. Luplow', 'C. Santana', ...
Say you wanted to build that dict with team names and players, you could do something like the following. I don't know if you want the highlighted players, e.g. Trevor Bauer; I have added variables to hold them in case they are needed.
Ad boxes and tool boxes are excluded via the :not pseudo-class, which is passed a list of classes to ignore.
from bs4 import BeautifulSoup as bs
import requests

r = requests.get('https://www.rotowire.com/baseball/daily-lineups.php')
soup = bs(r.content, 'lxml')
teams = [item.text for item in soup.select('.lineup__abbr')]  # 26
matches = {}
i = 0
for teambox in soup.select('.lineups > div:not(.is-ad, .is-tools)'):
    team_visit = teams[i]
    team_home = teams[i + 1]
    highlights = teambox.select('.lineup__player-highlight-name a')
    visit_highlight = highlights[0].text
    home_highlight = highlights[1].text
    match = team_visit + ' v ' + team_home
    visitors = [item['title'] for item in teambox.select('.is-visit .lineup__player [title]')]
    home = [item['title'] for item in teambox.select('.is-home .lineup__player [title]')]
    matches[match] = {'visitor': [{team_visit: visitors}],
                      'home': [{team_home: home}]}
    i += 2  # each box consumes two entries from teams (visitor and home)
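You can then inspect the resulting dict with something like this (the team abbreviations in the comment are illustrative, not actual output):
from pprint import pprint
pprint(matches)  # e.g. {'CIN v MIL': {'visitor': [{'CIN': [...]}], 'home': [{'MIL': [...]}]}, ...}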
I think you were almost there, you just needed to tweak it a little bit:
awayPlayers = [player.find('a').text for player in players]
This list comprehension grabs the anchor from each item in the list and pulls the text from it, so you get just a list of the names:
['L. Martin',
'Jose Ramirez',
'J. Luplow'...]
You have to find the a tag and get the title attribute from it; see the code below.
awayPlayers = [player.find('a').get('title') for player in players]
print(awayPlayers)
Output is:
['Leonys Martin', 'Jose Ramirez', 'Jordan Luplow', 'Carlos Santana', ...]

Remove unwanted characters from string with BeautifulSoup Python when selecting words from string

I am new to Python and still don't understand all of its functionality, but I am getting close to what I am trying to achieve.
Essentially I have got the programme to scrape the data I want from the website, but when it prints selected words/items from the "specs" string it also prints characters such as [ ] and '' from the string.
For example, I am trying to get just the 'gearbox' type, 'fuel' type and 'mileage' from a list of li's, which I have converted to a string with the plan to then select the specific items from that string.
What I am getting with the current programme is this:
['Manual']['Petrol']['86,863 miles']
What I would like to achieve is a printed result like this:
Manual, Petrol, 86,863 miles
Which, when exported to separate columns in my .csv, should show up in their correct columns under the appropriate headings.
I have tried .text to get just the text, but it shows up with the 'list' object has no attribute 'text' error.
import csv
import requests
from bs4 import BeautifulSoup

outfile = open('pistonheads.csv', 'w', newline='')
writer = csv.writer(outfile)
writer.writerow(["Link", "Make", "Model", "Price", "Image Link",
                 "Gearbox", "Fuel", "Mileage"])
url = 'https://www.pistonheads.com/classifieds?Category=used-cars&Page=1&ResultsPerPage=100'
get_url = requests.get(url)
get_text = get_url.text
soup = BeautifulSoup(get_text, 'html.parser')
car_link = soup.find_all('div', 'listing-headline', 'price')
for div in car_link:
    links = div.findAll('a')
    for a in links:
        link = ("https://www.pistonheads.com" + a['href'])
        make = (a['href'].split('/')[-4])
        model = (a['href'].split('/')[-3])
        price = a.find('span').text.rstrip()
        image_link = a.parent.parent.find('img')['src']
        image = ("https:") + image_link
        vehicle_details = a.parent.parent.find('ul', class_='specs')
        specs = list(vehicle_details.stripped_strings)
        gearbox = specs[3:]
        fuel = specs[1:2]
        mileage = specs[0:1]
        writer.writerow([link, make, model, price, image, gearbox, fuel, mileage])
        print(link, make, model, price, image, gearbox, fuel, mileage)
outfile.close()
Welcome to StackOverflow!
So there's a lot to improve in your script. You are getting there!
specs = list(vehicle_details.stripped_strings) resolves a generator into a list. Effectively, you can access the things you want by index. For example, mileage can simply be specs[0].
The extra [ and ] you are getting are caused by your use of slicing, e.g. mileage = specs[0:1]. From the documentation, indexing returns an item while slicing returns a new list. See the lists introduction.
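A quick illustration of the difference, using a made-up specs list:
specs = ['86,863 miles', 'Petrol', '2 owners', 'Manual']  # made-up example values
print(specs[0])    # '86,863 miles' -- indexing returns the item itself
print(specs[0:1])  # ['86,863 miles'] -- slicing returns a new one-element list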
(Optional) And finally, to get all of that information in a single line, you can do multiple assignment from the specs list. See multiple assignments.
mileage, fuel, _, gearbox = specs
Bonus tip: when in doubt, use pdb.
mileage = specs[0]
import pdb; pdb.set_trace()  # temporarily on one line so you can remove it easily later
# now you can interactively inspect your code
(Pdb) specs
Good luck! And enjoy Python!
If you want to get the string out of the list, maybe you can do this:
gearbox = specs[3:][0] if specs[3:] else '-'
fuel = specs[1:2][0] if specs[1:2] else '-'
mileage = specs[0:1][0] if specs[0:1] else '-'
But both this approach and aldnav's answer can give a false result or even throw an error:
ValueError: not enough values to unpack
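One defensive variant (a sketch, assuming the four fields appear in this order when present) pads the list before unpacking so it never raises:
# pad the list so unpacking never raises; missing fields fall back to '-'
mileage, fuel, _, gearbox = (specs + ['-'] * 4)[:4]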
Usually I extract the parent container first, rather than selecting the child (a) and then walking back up to the parent.
# helper to get a dynamic specs element
def getSpec(element, selector):
    spec = element.select_one(selector)
    return spec.nextSibling.string.strip() if spec else '-'
soup = BeautifulSoup(get_text, 'html.parser')
results = soup.find_all('div', class_="result-contain")
for car in results:
    a = car.find('a')
    if not a:
        continue
    link = ("https://www.pistonheads.com" + a['href'])
    make = (a['href'].split('/')[-4])
    model = (a['href'].split('/')[-3])
    price = a.find('span').text.rstrip()
    image_link = car.find('img')['src']
    image = ("https:") + image_link
    if not car.find('ul', class_='specs'):
        gearbox = fuel = mileage = '-'
    else:
        gearbox = getSpec(car, '.location-pin-4')
        fuel = getSpec(car, '.gas-1')
        mileage = getSpec(car, '.gauge-1')
    print(gearbox, fuel, mileage)
    writer.writerow([link, make, model, price, image, gearbox, fuel, mileage])
    # print(link, make, model, price, image, gearbox, fuel, mileage)
outfile.close()

Python extract and append data into data frame

I've scraped the website for my research, but I couldn't find the right way to extract the data into a data frame. I believe that my problem is related to the list objects between lines 36 and 38 of my script.
The print line works very nicely; I can see the final version of the data frame in the Python console.
The solution may be really easy, but I couldn't figure it out. Thanks in advance for all your help.
from time import sleep
from bs4 import BeautifulSoup, SoupStrainer
import requests
import pandas as pd

# Insert the highest page number for the website
highest_number = 12

def total_page_number(url):
    all_webpage_links = []
    all_webpage_links.insert(0, url)
    pages = [str(each_number) for each_number in range(2, highest_number)]
    for page in pages:
        link = ''.join(url + '&page=' + page)
        all_webpage_links.append(link)
    return all_webpage_links

# Use total_page_number function to create page list for website
All_page = total_page_number(
    'https://www.imdb.com/search/title?countries=tr&languages=tr&locations=Turkey&count=250&view=simple')

def clean_text(text):
    """ Removes white-spaces before, after, and between characters

    :param text: the string to clean
    :return: a "cleaned" string with no more than one white space between
    characters
    """
    return ' '.join(text.split())

# Create list objects for data
# Problem occurs in this line !!!!!!
actor_names = []
titles = []
dates = []

def get_cast_from_link(movie_link):
    """ Go to the IMDb Movie page in link, and find the cast overview list.
    Prints tab-separated movie_title, actor_name, and character_played to
    stdout as a result. Nothing returned

    :param movie_link: string of the link to IMDb movie page (http://imdb.com
    ...)
    :return: void
    """
    movie_page = requests.get(movie_link)
    # Use SoupStrainer to strain the cast_list table from the movie_page
    # This can save some time in bigger scraping projects
    cast_strainer = SoupStrainer('table', class_='cast_list')
    movie_soup = BeautifulSoup(movie_page.content, 'html.parser', parse_only=cast_strainer)
    # Iterate through rows and extract the name and character
    # Remember that some rows might not be a row of interest (e.g., a blank
    # row for spacing the layout). Therefore, we need to use a try-except
    # block to make sure we capture only the rows we want, without python
    # complaining.
    for row in movie_soup.find_all('tr'):
        try:
            actor = clean_text(row.find(itemprop='name').text)
            actor_names.append(actor)
            titles.append(movie_title)
            dates.append(movie_date)
            print('\t'.join([movie_title, actor, movie_date]))
        except AttributeError:
            pass

# Export data frame
# Problem occurs in this line !!!!!!
tsd_df = pd.DataFrame({'Actor_Names': actor_names,
                       'Movie_Title': titles,
                       'Movie_Date': dates})
tsd_df.to_csv('/Users/ea/Desktop/movie_df.tsv', encoding='utf-8')

for each in All_page:
    # Use requests.get('url') to load the page you want
    web_page = requests.get(each)
    # https://www.imdb.com/search/title?countries=tr&languages=tr&count=250&view=simple&page=2
    # Prepare the SoupStrainer to strain just the tbody containing the list of movies
    list_strainer = SoupStrainer('div', class_='lister-list')
    # Parse the html content of the web page with BeautifulSoup
    soup = BeautifulSoup(web_page.content, 'html.parser', parse_only=list_strainer)
    # Generate a list of the "Rank & Title" column of each row and iterate
    movie_list = soup.find_all('span', class_='lister-item-header')
    for movie in movie_list:
        movie_title = movie.a.text
        movie_date = movie.find('span', class_='lister-item-year text-muted unbold').text
        # get the link to the movie's own IMDb page, and jump over
        link = 'http://imdb.com' + movie.a.get('href')
        get_cast_from_link(link)
        # remember to be nice, and sleep a while between requests!
        sleep(15)

Python - How would I scrape this website for specific data that's constantly changing/being updated?

The website is:
https://pokemongo.gamepress.gg/best-attackers-type
My code is as follows, for now:
from bs4 import BeautifulSoup
import requests
import re

site = 'https://pokemongo.gamepress.gg/best-attackers-type'
headers = {'User-Agent': 'Mozilla/5.0'}  # headers was undefined in the original snippet; a placeholder definition
page_data = requests.get(site, headers=headers)
soup = BeautifulSoup(page_data.text, 'html.parser')
check_gamepress = soup.body.findAll(text=re.compile("Strength"))
print(check_gamepress)
However, I really want to scrape certain data, and I'm really having trouble.
For example, how would I scrape the portion that shows the following for the best Bug type:
"Good typing and lightning-fast attacks. Though cool-looking, Scizor is somewhat fragile."
This information could obviously be updated, as it has been in the past, when a better Pokemon comes out for that type. So how would I scrape this data so that it keeps working when the page is updated in the future, without me having to make code changes?
In advance, thank you for reading!
This particular site is a bit tough because of how the HTML is organized. The relevant tags containing the information don't really have many distinguishing features, so we have to get a little clever. To make matters complicated, the divs that contain the information across the whole page are siblings. We'll also have to make up for this web-design travesty with some ingenuity.
I did notice a pattern that is (almost entirely) consistent throughout the page. Each 'type' and underlying section are broken into 3 divs:
A div containing the type and pokemon, for example Dark Type: Tyranitar.
A div containing the 'specialty' and moves.
A div containing the 'ratings' and commentary.
The basic idea that follows here is that we can begin to organize this markup chaos through a procedure that loosely goes like this:
Identify each of the type title divs
For each of those divs, get the other two divs by accessing its siblings
Parse the information out of each of those divs
With this in mind, I produced a working solution. The meat of the code consists of 5 functions. One to find each section, one to extract the siblings, and three functions to parse each of those divs.
import re
import json
import requests
from pprint import pprint
from bs4 import BeautifulSoup

def type_section(tag):
    """Find the tags that have the move type and pokemon name"""
    pattern = r"[A-z]{3,} Type: [A-z]{3,}"
    # if all these things are true, it should be the right tag
    return all((tag.name == 'div',
                len(tag.get('class', '')) == 1,
                'field__item' in tag.get('class', []),
                re.findall(pattern, tag.text),
                ))

def parse_type_pokemon(tag):
    """Parse out the move type and pokemon from the tag text"""
    s = tag.text.strip()
    poke_type, pokemon = s.split(' Type: ')
    return {'type': poke_type, 'pokemon': pokemon}

def parse_speciality(tag):
    """Parse the tag containing the speciality and moves"""
    table = tag.find('table')
    rows = table.find_all('tr')
    speciality_row, fast_row, charge_row = rows
    speciality_types = []
    for anchor in speciality_row.find_all('a'):
        # Each type 'badge' has a href with the type name at the end
        href = anchor.get('href')
        speciality_types.append(href.split('#')[-1])
    fast_move = fast_row.find('td').text
    charge_move = charge_row.find('td').text
    return {'speciality': speciality_types,
            'fast_move': fast_move,
            'charge_move': charge_move}

def parse_rating(tag):
    """Parse the tag containing categorical ratings and commentary"""
    table = tag.find('table')
    category_tags = table.find_all('th')
    strength_tag, meta_tag, future_tag = category_tags
    str_rating = strength_tag.parent.find('td').text.strip()
    meta_rating = meta_tag.parent.find('td').text.strip()
    future_rating = future_tag.parent.find('td').text.strip()
    blurb_tags = table.find_all('td', {'colspan': '2'})
    if blurb_tags:
        # `if` to accommodate fire section bug
        str_blurb_tag, meta_blurb_tag, future_blurb_tag = blurb_tags
        str_blurb = str_blurb_tag.text.strip()
        meta_blurb = meta_blurb_tag.text.strip()
        future_blurb = future_blurb_tag.text.strip()
    else:
        str_blurb = meta_blurb = future_blurb = None
    return {'strength': {
                'rating': str_rating,
                'commentary': str_blurb},
            'meta': {
                'rating': meta_rating,
                'commentary': meta_blurb},
            'future': {
                'rating': future_rating,
                'commentary': future_blurb}
            }

def extract_divs(tag):
    """
    Get the divs containing the moves/ratings
    determined based on sibling position from the type tag
    """
    _, speciality_div, _, rating_div, *_ = tag.next_siblings
    return speciality_div, rating_div

def main():
    """All together now"""
    url = 'https://pokemongo.gamepress.gg/best-attackers-type'
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'lxml')
    types = {}
    for type_tag in soup.find_all(type_section):
        type_info = {}
        type_info.update(parse_type_pokemon(type_tag))
        speciality_div, rating_div = extract_divs(type_tag)
        type_info.update(parse_speciality(speciality_div))
        type_info.update(parse_rating(rating_div))
        type_ = type_info.get('type')
        types[type_] = type_info
    pprint(types)  # We did it
    with open('pokemon.json', 'w') as outfile:
        json.dump(types, outfile)

if __name__ == '__main__':
    main()
There is, for now, one small wrench in the whole thing. Remember when I said this pattern was almost entirely consistent? Well, the Fire type is an odd-ball here, because they included two pokemon for that type, so the Fire type results are not correct. I or some brave person may come up with a way to deal with that. Or maybe they'll decide on one fire pokemon in the future.
This code, the resulting json (prettified), and an archive of HTML response used can be found in this gist.
