I'm trying to scrape finviz (https://finviz.com/quote.ashx?t=aapl) for the market cap in the fundamental table, but I can't for the life of me locate the table or the class with Beautiful Soup. Every time I use soup.find_all() it works for 'div', 'td', 'table', etc., but it returns an empty list when I add a class like {'class': 'snapshot-td2'}. Does anyone know how I can fix this?
import requests
import bs4
from bs4 import BeautifulSoup

def parseMarketCap():
    response = requests.get("https://finviz.com/quote.ashx?t=aapl")
    soup = bs4.BeautifulSoup(response.content, "lxml")
    table = soup.find_all('td', {'class': 'snapshot-td2'})
    print(table)
I also tried the following for the table, but no luck as well:
table = soup.find_all('table', {'class': "snapshot-table2"})
[Screenshots: browser inspector view and the fundamental table]
You need a User-Agent header; then you can use :-soup-contains() to target the preceding td by its text and move to the desired value field:
import requests
from bs4 import BeautifulSoup as bs
r = requests.get('https://finviz.com/quote.ashx?t=aapl', headers = {'User-Agent':'Mozilla/5.0'})
soup = bs(r.content, 'lxml')
print(soup.select_one('td:-soup-contains("Market Cap")').find_next('b').text)
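To illustrate how that selector works, here is the same pattern run against a minimal, made-up snippet that mimics finviz's snapshot table layout (the real page has many more cells):

```python
from bs4 import BeautifulSoup

# Hypothetical markup standing in for finviz's snapshot table
html = """
<table class="snapshot-table2">
  <tr>
    <td class="snapshot-td2">Market Cap</td>
    <td class="snapshot-td2"><b>2500.00B</b></td>
  </tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
# :-soup-contains matches the td whose text includes "Market Cap";
# find_next('b') then moves to the value cell that follows it
value = soup.select_one('td:-soup-contains("Market Cap")').find_next('b').text
print(value)  # 2500.00B
```

The same two-step pattern (match the label cell, then step to the value) works for any other row of the table.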
As QHarr suggested, add a User-Agent header to get a proper response:
response = requests.get("https://finviz.com/quote.ashx?t=aapl", headers={'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.116 Safari/537.36'})
soup = BeautifulSoup(response.text, "lxml")
content_table = soup.find('table', {'class': 'snapshot-table2'})
table_rows = content_table.find_all('tr')
market_cap = table_rows[1].find_all('td')[1].text
print(market_cap)
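Since the snapshot cells alternate label/value, you can also collect the whole table into a dict instead of indexing a single row. A sketch on a minimal, made-up snippet (the real table has many more cells):

```python
from bs4 import BeautifulSoup

# Hypothetical one-row stand-in for finviz's snapshot table
html = """
<table class="snapshot-table2">
  <tr><td>P/E</td><td>28.93</td><td>Market Cap</td><td>2500.00B</td></tr>
</table>
"""
soup = BeautifulSoup(html, "html.parser")
cells = [td.text for td in soup.find('table', {'class': 'snapshot-table2'}).find_all('td')]
# Cells alternate label, value - pair them up into a dict
snapshot = dict(zip(cells[::2], cells[1::2]))
print(snapshot['Market Cap'])  # 2500.00B
```

This way you can look up any metric by its label rather than hard-coding row and column indices.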
Try using a User-Agent header in your request, like this:
user_agent = {'User-Agent': 'YOUR_USER_AGENT'}
r = requests.get('YOURURL', headers=user_agent)
...
Related
I'm trying to get all the reviews from a specific hotel page on booking.com.
I tried this code, but nothing is printed at all.
This is the code I tried:
import urllib.request
from bs4 import BeautifulSoup
url='https://www.booking.com/hotel/sa/sarwat-park.ar.html?aid=304142&label=gen173nr-1DCAEoggI46AdIM1gEaMQBiAEBmAERuAEHyAEM2AED6AEBiAIBqAIDuAL_oY-aBsACAdICJDE5YzYxY2ZiLWRlYjUtNDRjNC04Njk0LTlhYWY4MDkzYzNhNNgCBOACAQ&sid=c7009aac67195c0a7ef9aa63f6537581&dest_id=6376991;dest_type=hotel;dist=0;group_adults=2;group_children=0;hapos=1;hpos=1;no_rooms=1;req_adults=2;req_children=0;room1=A%2CA;sb_price_type=total;sr_order=popularity;srepoch=1665388865;srpvid=1219386046550156;type=total;ucfs=1&#tab-reviews'
req = urllib.request.Request(
    url,
    headers={
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.47 Safari/537.36',
    }
)
f = urllib.request.urlopen(req)
soup = BeautifulSoup(f.read().decode('utf-8'), 'html.parser')
reviews = soup.findAll("li", {"class": "review_item clearfix "})
for review in reviews:
    print(review.find("div", {"class": "review_item_header_content"}).text)
To begin with, there is no class "review_item" anywhere on the page.
A better approach is to use lxml's etree to find and extract details via the XPath of the review list you have now:
//*[@id="b2hotelPage"]/div[25]/div/div/div/div[1]/div[2]/div/ul
Then you could do something like:
import requests as req
from bs4 import BeautifulSoup as bs
from lxml import etree

webpage = req.get(URL, headers=headers)
soup = bs(webpage.content, "html.parser")
dom = etree.HTML(str(soup))
listTarget = dom.xpath('//*[@id="b2hotelPage"]/div[25]/div/div/div/div[1]/div[2]/div/ul')
This should give you a list of lxml objects which are essentially your comment cards.
Then you can work on them in a similar fashion
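A self-contained sketch of that etree workflow, using made-up markup in place of booking.com's real page (the id, classes, and texts here are illustrative):

```python
from bs4 import BeautifulSoup
from lxml import etree

# Made-up markup standing in for the review list on the live page
html = """
<ul id="reviews">
  <li class="review_item"><div class="review_item_header_content">Great stay</div></li>
  <li class="review_item"><div class="review_item_header_content">Very clean</div></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")
dom = etree.HTML(str(soup))
# Each <li> is one comment card; pull the header text out of each
cards = dom.xpath('//*[@id="reviews"]/li')
titles = [card.xpath('.//div/text()')[0] for card in cards]
print(titles)
```

The relative XPath `.//div/text()` is evaluated against each card, which is how you drill into the individual review fields.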
I am trying to get the restaurant name and address of each restaurant from this platform:
https://customers.dlivery.live/en/list
So far I have tried BeautifulSoup:
import requests
from bs4 import BeautifulSoup
import json
url = 'https://customers.dlivery.live/en/list'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '
           'AppleWebKit/537.36 (KHTML, like Gecko) '
           'Chrome/75.0.3770.80 Safari/537.36'}
response = requests.get(url,headers=headers)
soup = BeautifulSoup(response.text, "html.parser")
soup
I noticed that soup does not contain the restaurant data.
How can I get it?
If you inspect the page, you will notice that the names are wrapped in the card_heading class and the addresses in the card_distance class.
soup = BeautifulSoup(response.text, 'html.parser')
restaurantAddress = soup.find_all(class_='card_distance')
for address in restaurantAddress:
    print(address.text)
and
soup = BeautifulSoup(response.text, 'html.parser')
restaurantNames = soup.find_all(class_='card_heading')
for name in restaurantNames:
    print(name.text)
Not sure if this exact code will work, but this is pretty close to what you are looking for.
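To pair each name with its address rather than printing the two lists separately, you can zip the results. A sketch on illustrative markup, since I haven't verified the live page (the card markup and texts below are made up):

```python
from bs4 import BeautifulSoup

# Illustrative snippet; card_heading / card_distance are the classes named above
html = """
<div class="card"><div class="card_heading">Pizza Roma</div>
  <div class="card_distance">12 Main St</div></div>
<div class="card"><div class="card_heading">Sushi Bar</div>
  <div class="card_distance">34 High St</div></div>
"""
soup = BeautifulSoup(html, "html.parser")
names = [n.text for n in soup.find_all(class_='card_heading')]
addresses = [a.text for a in soup.find_all(class_='card_distance')]
# Pair name i with address i so each restaurant stays together
pairs = list(zip(names, addresses))
print(pairs)
```

Zipping keeps each restaurant's fields together even if you later add more per-card attributes.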
I want to get all the images within a div, but every time I try, the output returns 'None' or just an empty list. The issue only seems to happen when I try to scrape inside a div or a class, even using different user-agents, .find, or .find_all.
from bs4 import BeautifulSoup
import requests
abcde = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.64 Safari/537.36'}
r = requests.get('https://www.gettyimages.com.br/fotos/randon', headers=abcde)
soup = BeautifulSoup(r.content, 'html.parser')
check = soup.find_all('img', class_="GalleryItems-module__searchContent___DbMmK")
print(check)
I would recommend working with the API; there is one at https://developers.gettyimages.com/docs/.
To answer your question about the images themselves: classes are not the best identifiers because they are often dynamic, and there is both a gallery (fixed) and a mosaic view.
Simply select the <article> elements and their child <img> tags to reach your goal:
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.gettyimages.com.br/fotos/randon?assettype=image&sort=mostpopular&phrase=randon',
                 headers={'User-Agent': 'Mozilla/5.0'})
soup = BeautifulSoup(r.text, 'lxml')
for e in soup.select('article img'):
    print(e.get('src'))
Output
https://media.gettyimages.com/photos/randon-norway-picture-id974597088?k=20&m=974597088&s=612x612&w=0&h=EIwbJNzCld1tbU7rTyt42pie2yCEk5z4e6L6Z4kWhdo=
https://media.gettyimages.com/photos/caption-patrick-roanhouse-a-266-member-chats-about-some-software-on-picture-id97112678?k=20&m=97112678&s=612x612&w=0&h=zmwqIlVv2f-M9Vz_qcpITPzj-3SON99G3P69h69J5Gs=
https://media.gettyimages.com/photos/12th-and-f-streets-nw-washington-dc-pedestrians-teofila-randon-left-picture-id97102402?k=20&m=97102402&s=612x612&w=0&h=potzNUgMo3gKab5eS_pwyNggS2YGn6sCnDQYxdGUHqc=
https://media.gettyimages.com/photos/randon-perdue-kari-barnhart-attend-the-other-nashville-society-one-picture-id969787702?k=20&m=969787702&s=612x612&w=0&h=kcaYmOKruLb57Vqga68xvEZB1V12wSPPYkC6GdvXO18=
https://media.gettyimages.com/photos/death-of-duguesclin-to-chateauneuf-de-randon-july-13-1380-during-the-picture-id959538894?k=20&m=959538894&s=612x612&w=0&h=lx3DHDSf3kBc_h-O2kjR2D6UYDjPPvhn8xJ_KM0cmMc=
https://media.gettyimages.com/photos/ski-de-randone-a-saintefoy-au-dessus-du-couloir-de-la-croix-savoie-mr-picture-id945817638?k=20&m=945817638&s=612x612&w=0&h=fRd3M2KCa5dd0z8ePnPw2IkAKhXYJpuCFuUTz7jpVPU=
...
So I tried to get a specific text from a website, but it only gives me this error:
floor = soup.find('span', {'class': 'text-white fs-14px text-truncate attribute-value'}).text
AttributeError: 'NoneType' object has no attribute 'text'
I specifically want to get the 'Floor Price' text.
My code:
import requests
import bs4
from bs4 import BeautifulSoup
#target url
url = "https://magiceden.io/marketplace/solsamo"
#act like browser
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
response = requests.get('https://magiceden.io/marketplace/solsamo')
#parse the downloaded page
soup = BeautifulSoup(response.content, 'lxml')
floor = soup.find('span', {'class': 'text-white fs-14px text-truncate attribute-value'}).text
print(floor)
The data you need is not in the HTML you receive after:
response = requests.get('https://magiceden.io/marketplace/solsamo')
You can confirm this by looking at the page source:
view-source:https://magiceden.io/marketplace/solsamo
You should use Selenium instead of requests to get your data, or examine the XHR requests on this page; you may be able to get this data with requests by calling a different endpoint.
I'm new to the beautifulsoup module and I have a problem. My code is simple. First of all, the site I'm trying to scrape is this,
and I am trying to scrape the price (the big number starting with two (2)).
My code:
import urllib.request
from bs4 import BeautifulSoup
quote_page = 'https://www.bloomberg.com/quote/SPX:IND'
page = urllib.request.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
price_box = soup.find('div', attr = {'class': 'price'})
price = price_box.text
print(price)
The error I get:
price = price_box.text
AttributeError: 'NoneType' object has no attribute 'text'
I have used a more robust CSS Selector instead of the find methods. Since there is only one div element with class price, I am guessing this is the right element.
import requests
from bs4 import BeautifulSoup
response = requests.get('https://www.bloomberg.com/quote/SPX:IND')
soup = BeautifulSoup(response.content, 'lxml')
price = soup.select_one('.price').text
print(price)
Another solution:
from bs4 import BeautifulSoup
from requests import Session
session = Session()
session.headers['user-agent'] = (
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
'66.0.3359.181 Safari/537.36'
)
quote_page = 'https://www.bloomberg.com/quote/SPX:IND'
page = session.get(quote_page)
soup = BeautifulSoup(page.text, 'html.parser')
price_box = soup.find('meta', itemprop="price")
price = float(price_box['content'])
print(price)
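The meta-tag lookup in that last snippet works because the price is embedded as structured data in the page head. In isolation it behaves like this (the content value below is made up):

```python
from bs4 import BeautifulSoup

# Minimal stand-in for the structured-data tag on the quote page
html = '<meta itemprop="price" content="2904.98">'
soup = BeautifulSoup(html, 'html.parser')
# itemprop= is passed as an attribute filter to find()
price = float(soup.find('meta', itemprop="price")['content'])
print(price)  # 2904.98
```

Structured-data tags like this are often more stable scrape targets than styled display elements, since they exist for machine consumption.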