I want to scrape the name, URL and description of companies listed on Google Finance. So far I have managed to get the description and URL, but I am unable to fetch the name. In the page source of myUrl, the name is 024 Pharma Inc. When I inspect the div, its class is 'appbar-snippet-primary', but the code still doesn't find it. I am new to web scraping, so maybe I am missing something. Please guide me in this regard.
from bs4 import BeautifulSoup
from urllib.request import urlopen

myUrl = 'https://www.google.com/finance?q=OTCMKTS%3AEEIG'
r = urlopen(myUrl).read()
soup = BeautifulSoup(r, 'html.parser')

name_box = soup.find('div', class_='appbar-snippet-primary')  # !! This div is not found
#name = name_box.text
#print(name)

description = soup.find('div', class_='companySummary')
desc = description.text.strip()
#print(desc)

website = soup.find('div', class_='item')
site = website.text
#print(site)
from bs4 import BeautifulSoup
import requests

myUrl = 'https://www.google.com/finance?q=OTCMKTS%3AEEIG'
r = requests.get(myUrl).content
soup = BeautifulSoup(r, 'html.parser')

name = soup.find('title').text.split(':')[0]  # take the name from the page <title> instead
#print(name)

description = soup.find('div', class_='companySummary')
desc = description.text.strip()
#print(desc)

website = soup.find('div', class_='item')
site = website.text
Use soup.find_all() instead of soup.find().
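For context on what find() and find_all() return, here is a minimal sketch on a static snippet (the markup below is made up; note that on the live page the div may be missing from the raw HTML altogether, because Google Finance renders much of the page with JavaScript, in which case neither call will see it):

```python
from bs4 import BeautifulSoup

# A small static snippet standing in for the real page (hypothetical markup).
html = """
<div class="appbar-snippet-primary"><span>024 Pharma Inc</span></div>
<div class="item">www.024pharma.com</div>
<div class="item">OTCMKTS:EEIG</div>
"""
soup = BeautifulSoup(html, 'html.parser')

# find() returns only the first matching element (or None if nothing matches)...
first_item = soup.find('div', class_='item')
print(first_item.text)

# ...while find_all() returns every match as a list, so nothing is silently skipped.
all_items = soup.find_all('div', class_='item')
print(len(all_items))
```

If find() returns None, check the fetched HTML itself (print(r)) before blaming the selector; a class visible in the browser inspector may simply not exist in the server response.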
I'm new to Python and need some help. I am trying to scrape the image URLs from this site but can't seem to do so, even though I can pull up all the HTML. Here is my code.
import requests
import pandas as pd
import urllib.parse
from bs4 import BeautifulSoup
import csv
baseurl = 'https://www.thewhiskyexchange.com/'
productlinks = []

for x in range(1, 4):
    r = requests.get(f'https://www.thewhiskyexchange.com/c/316/campbeltown-single-malt-scotch-whisky?pg={x}')
    soup = BeautifulSoup(r.content, 'html.parser')
    tag = soup.find_all('ul', {'class': 'product-grid__list'})
    for items in tag:
        for link in items.find_all('a', href=True):
            productlinks.append(baseurl + link['href'])

#print(len(productlinks))

for items in productlinks:
    r = requests.get(items)
    soup = BeautifulSoup(r.content, 'html.parser')
    name = soup.find('h1', class_='product-main__name').text.strip()
    price = soup.find('p', class_='product-action__price').text.strip()
    imgurl = soup.find('div', class_='product-main__image-container')
    print(imgurl)
And here is the piece of HTML I am trying to scrape from.
<div class="product-card__image-container"><img src="https://img.thewhiskyexchange.com/480/gstob.non1.jpg" alt="Glen Scotia Double Cask Sherry Finish" class="product-card__image" loading="lazy" width="3" height="4">
I would appreciate any help. Thanks
You need to first select the image then get the src attribute.
Try this:
imgurl = soup.find('div', class_='product-main__image-container').find('img')['src']
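To see this pattern in isolation, here is a minimal sketch against a static snippet modeled on the HTML in the question (the class names and URL are taken from the question and may differ on the live site):

```python
from bs4 import BeautifulSoup

# Static stand-in for one product page (markup modeled on the question).
html = """
<div class="product-main__image-container">
  <img src="https://img.thewhiskyexchange.com/480/gstob.non1.jpg"
       alt="Glen Scotia Double Cask Sherry Finish" class="product-main__image">
</div>
"""
soup = BeautifulSoup(html, 'html.parser')

# Select the container div first, then the <img> inside it, then read its src attribute.
imgurl = soup.find('div', class_='product-main__image-container').find('img')['src']
print(imgurl)
```

Printing the div itself shows the whole tag; chaining .find('img')['src'] is what narrows it down to the attribute value.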
I'm not sure if I fully understand what output you are looking for. But if you just want the img source URLs, this might work:
# imgurl = soup.find('div', class_='product-main__image-container')
imgurl = soup.find('img', class_='product-main__image')
imgurl_attribute = imgurl['src']
print(imgurl_attribute)
#https://img.thewhiskyexchange.com/900/gstob.non1.jpg
#https://img.thewhiskyexchange.com/900/gstob.15yov1.jpg
#https://img.thewhiskyexchange.com/900/gstob.18yov1.jpg
#https://img.thewhiskyexchange.com/900/gstob.25yo.jpg
#https://img.thewhiskyexchange.com/900/sets_gst1.jpg
I have a script which scrapes a website for the name, region and province of companies in Spain. Within the HTML there is another link which takes you to a page that contains the phone number, but when I try to scrape even that HTML, it prints "None". Is there a way for the script to automatically move to that page, scrape the number and match it with the company row?
import requests
from googlesearch import search
from bs4 import BeautifulSoup
for page in range(1, 65):
    url = "https://www.expansion.com/empresas-de/ganaderia/granjas-en-general/{page}.html".format(page=page)
    page = requests.get(url)
    soup = BeautifulSoup(page.content, "html.parser")
    lists = soup.select("div#simulacion_tabla ul")

    # scrape the list
    for lis in lists:
        title = lis.find('li', class_="col1").text
        location = lis.find('li', class_="col2").text
        province = lis.find('li', class_="col3").text
        link = lis.find('href', class_="col1")
        info = [title, location, province, link]
        print(info)
Alternatively, is there a way to do it with the googlesearch library?
Many thanks
The first page's URL is "https://www.expansion.com/empresas-de/ganaderia/granjas-en-general/index.html", not
"https://www.expansion.com/empresas-de/ganaderia/granjas-en-general/1.html",
which is why your script returns no output for it.
You can try it like this:
import requests
from bs4 import BeautifulSoup

baseurl = ["https://www.expansion.com/empresas-de/ganaderia/granjas-en-general/index.html"]
urls = [f'https://www.expansion.com/empresas-de/ganaderia/granjas-en-general/{i}.html' for i in range(2, 5)]
allurls = baseurl + urls
print(allurls)

for url in allurls:
    page = requests.get(url)
    soup = BeautifulSoup(page.content, "html.parser")
    lists = soup.select("div#simulacion_tabla ul")

    # scrape the list
    for lis in lists:
        title = lis.find('li', class_="col1").text
        location = lis.find('li', class_="col2").text
        province = lis.find('li', class_="col3").text
        link = lis.select("li.col1 a")[0]['href']
        info = [title, location, province, link]
        print(info)
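The other key change is the link line: find('href', ...) searches for a tag named href, which never exists, since href is an attribute. A CSS selector reaches the anchor inside the first column instead. A minimal sketch on static HTML (the row markup and company name below are hypothetical, modeled on the question's table structure):

```python
from bs4 import BeautifulSoup

# Hypothetical row markup mirroring div#simulacion_tabla on the real page.
html = """
<div id="simulacion_tabla">
  <ul>
    <li class="col1"><a href="/empresa/granja-ejemplo.html">GRANJA EJEMPLO SL</a></li>
    <li class="col2">Madrid</li>
    <li class="col3">Madrid</li>
  </ul>
</div>
"""
soup = BeautifulSoup(html, 'html.parser')
lis = soup.select("div#simulacion_tabla ul")[0]

# Searching for a tag called 'href' finds nothing: href is an attribute, not a tag.
print(lis.find('href', class_="col1"))

# The CSS selector walks to the <a> inside li.col1 and reads its href attribute.
link = lis.select("li.col1 a")[0]['href']
print(link)
```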
I'm practicing web scraping with BeautifulSoup but I'm struggling to print a dictionary that includes the items I've scraped.
The target can be any Telegram public channel (web version), and I intend to collect into the dictionary the text message, timestamp, views and image URL (if one is attached to the post).
I've inspected the code for the 4 elements, but the one for the image URL has no class or span, so I ended up scraping it via regex. The other 3 elements are easily retrievable.
Let's go step by step:
Importing modules
from bs4 import BeautifulSoup
import requests
import re
Function to get the image URLs from the public channel
def pictures(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'lxml')
    # converted to str in order to be able to apply regex
    link = str(soup.find_all('a', class_='tgme_widget_message_photo_wrap'))
    image_url = re.findall(r"https://cdn4.*?\.jpg", link)
    return image_url
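As a quick sanity check of the regex idea on a static string (the URLs and markup below are made up): a non-greedy `.*?` and an escaped dot keep each match from running past the first `.jpg`, so every photo URL comes back as its own list entry instead of one giant match.

```python
import re

# Two photo wraps as they might appear in str(find_all(...)); URLs are hypothetical.
link = ('[<a class="tgme_widget_message_photo_wrap" '
        'style="background-image:url(\'https://cdn4.telegram-cdn.example/file/abc.jpg\')"></a>, '
        '<a class="tgme_widget_message_photo_wrap" '
        'style="background-image:url(\'https://cdn4.telegram-cdn.example/file/def.jpg\')"></a>]')

# Non-greedy .*? stops each match at the first literal ".jpg" after "https://cdn4".
image_url = re.findall(r"https://cdn4.*?\.jpg", link)
print(image_url)
```

With a greedy pattern like `https://cdn4.*.jpg`, findall would return a single match spanning from the first URL to the last, which is rarely what you want.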
Soup to get the text message, timestamp and views
url = "https://t.me/s/computer_science_and_programming"
picture_list = pictures(url)
channel = requests.get(url).text
soup = BeautifulSoup(channel, 'lxml')
tgpost = soup.find_all('div', class_ ='tgme_widget_message')
full_message = {}
for content in tgpost:
    full_message['views'] = content.find('span', class_='tgme_widget_message_views').text
    full_message['timestamp'] = content.find('time', class_='time').text
    full_message['text'] = content.find('div', class_='tgme_widget_message_text').text
    print(full_message)
I would really appreciate it if someone could help me. I'm new to Python and I don't know how to:
Check if the post contains an image and if so, add it to the dictionary
Print the dictionary including image_url as key and the url as value for each post.
Thank you very much
I think you want something like this.
from bs4 import BeautifulSoup
import requests, re
url = "https://t.me/s/computer_science_and_programming"
channel = requests.get(url).text
soup = BeautifulSoup(channel, 'lxml')
tgpost = soup.find_all('div', class_ ='tgme_widget_message')
full_message = {}
for content in tgpost:
    full_message['views'] = content.find('span', class_='tgme_widget_message_views').text
    full_message['timestamp'] = content.find('time', class_='time').text
    full_message['text'] = content.find('div', class_='tgme_widget_message_text').text
    photo = content.find('a', class_='tgme_widget_message_photo_wrap')
    if photo is not None:
        full_message['url_image'] = re.findall(r"https://cdn4.*?\.jpg", str(photo))[0]
    elif 'url_image' in full_message:
        # drop the stale image URL carried over from the previous post
        full_message.pop('url_image')
    print(full_message)
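To see the conditional-key logic in isolation, here is a sketch over two static posts, one with a photo and one without (the markup and URL below are made up):

```python
from bs4 import BeautifulSoup
import re

# Two fake posts: the first has a photo wrap, the second does not.
html = """
<div class="tgme_widget_message">
  <a class="tgme_widget_message_photo_wrap"
     style="background-image:url('https://cdn4.example/photo1.jpg')"></a>
</div>
<div class="tgme_widget_message"></div>
"""
soup = BeautifulSoup(html, 'html.parser')

full_message = {}
keys_seen = []
for content in soup.find_all('div', class_='tgme_widget_message'):
    photo = content.find('a', class_='tgme_widget_message_photo_wrap')
    if photo is not None:
        full_message['url_image'] = re.findall(r"https://cdn4.*?\.jpg", str(photo))[0]
    elif 'url_image' in full_message:
        full_message.pop('url_image')  # drop the stale URL from the previous post
    keys_seen.append('url_image' in full_message)

print(keys_seen)
```

Because the same dict is reused across iterations, popping the key on photo-less posts is what keeps an old image URL from leaking into the next printed message.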
I am trying to scrape a website but failed to extract the description of each item. Here is my code:
from bs4 import BeautifulSoup
import requests
url = "http://engine.ddtc.co.id/putusan-pengadilan-pajak"
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data, 'html.parser')
puts = soup.find_all("div", {"class": "p3-search-item"})

for put in puts:
    title = put.find("div", {"class": "p3-title"}).text
    cat = put.find("div", {"class": "p3-category"}).text
    date = put.find("div", {"class": "search-result-item-meta"}).text
    link = put.find("a").get("href")

    put_response = requests.get(link)
    put_data = put_response.text
    put_soup = BeautifulSoup(put_data, "html.parser")
    put_description = put_soup.find("div", {"id": "modal-contents-pp"}).text

    print("Judul Putusan:", title, "\nKategori:", cat, "\nTanggal:", date, "\nLink:", link, "\nDescription:", put_description)
So I failed to extract the description: it comes out blank or with only a few words.
The full description is shown only after clicking each item's link.
I'd really appreciate any help.
I think you need to change the put_description field:
from bs4 import BeautifulSoup
import requests
url = "http://engine.ddtc.co.id/putusan-pengadilan-pajak"
response = requests.get(url)
data = response.text
soup = BeautifulSoup(data, 'html.parser')
puts = soup.find_all("div", {"class": "p3-search-item"})

for put in puts:
    title = put.find("div", {"class": "p3-title"}).text
    cat = put.find("div", {"class": "p3-category"}).text
    date = put.find("div", {"class": "search-result-item-meta"}).text
    link = put.find("a").get("href")

    put_response = requests.get(link)
    put_data = put_response.text
    put_soup = BeautifulSoup(put_data, "html.parser")
    put_description = put_soup.find("div", {"class": "p3-desc"}).text

    print("Judul Putusan:", title, "\nKategori:", cat, "\nTanggal:", date, "\nLink:", link, "\nDescription:", put_description)
I have the following code:
import requests
from bs4 import BeautifulSoup
import urllib.request
import urllib.parse
import re
market = 'INDU:IND'
quote_page = 'http://www.bloomberg.com/quote/' + market
page = urllib.request.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
name_box = soup.find('h1', attrs={'class': 'name'})
name = name_box.text.strip()
print('Market: ' + name)
This code works and lets me get the market name from the url. I'm trying to do something similar to this website. Here is my code:
market = 'BTC-GBP'
quote_page = 'https://uk.finance.yahoo.com/quote/' + market
page = urllib.request.urlopen(quote_page)
soup = BeautifulSoup(page, 'html.parser')
name_box = soup.find('span', attrs={'class': 'Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)'})
name = name_box.text.strip()
print('Market: ' + name)
I'm not sure what to do. I want to retrieve the current rate, the amount it has increased or decreased by (as a number and a percentage), and when the information was last updated. How do I do this? I don't mind if you use a different method from the one above, as long as you explain it. If my code is inefficient or unpythonic, please also tell me how to fix it. I'm pretty new to web scraping and these modules. Thanks!
You can use BeautifulSoup and when searching for the desired data, use regex to match the dynamic span classnames generated by the site's backend script:
from bs4 import BeautifulSoup as soup
import requests
import re
data = requests.get('https://uk.finance.yahoo.com/quote/BTC-GBP').text
s = soup(data, 'lxml')
d = [i.text for i in s.find_all('span', {'class': re.compile(r'Trsdu\(0\.\d+s\) Trsdu\(0\.\d+s\) Fw\(\w+\) Fz\(\d+px\) Mb\(-\d+px\) D\(\w+\)|Trsdu\(0\.\d+s\) Fw\(\d+\) Fz\(\d+px\) C\(\$data\w+\)')})]
date_published = re.findall(r'As of\s+\d+:\d+PM GMT\.|As of\s+\d+:\d+AM GMT\.', data)
final_results = dict(zip(['current', 'change', 'published'], d+date_published))
Output:
{'current': u'6,785.02', 'change': u'-202.99 (-2.90%)', 'published': u'As of 3:55PM GMT.'}
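To check what the pattern accepts, you can run one of its alternatives against sample class strings (the class values below are illustrative of Yahoo's generated names, not taken from the live page):

```python
import re

# The second alternative of the answer's pattern, shown on its own.
pattern = re.compile(r'Trsdu\(0\.\d+s\) Fw\(\d+\) Fz\(\d+px\) C\(\$data\w+\)')

# An illustrative generated classname that this alternative accepts...
match_ok = pattern.search('Trsdu(0.3s) Fw(500) Fz(14px) C($dataGreen)')
print(match_ok is not None)

# ...and one it rejects, because Fw(b) is not Fw(<digits>).
match_bad = pattern.search('Trsdu(0.3s) Fw(b) Fz(36px) Mb(-4px) D(ib)')
print(match_bad is None)
```

Escaping the parentheses and `$`, and pinning the numeric parts with `\d+`, is what lets one pattern track classnames whose numbers change between page builds.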
Edit: given the new URL, you need to change the span classname:
data = requests.get('https://uk.finance.yahoo.com/quote/AAPL?p=AAPL').text
final_results = dict(zip(['current', 'change', 'published'], [i.text for i in soup(data, 'lxml').find_all('span', {'class': re.compile(r'Trsdu\(0\.\d+s\) Trsdu\(0\.\d+s\) Fw\(b\) Fz\(\d+px\) Mb\(-\d+px\) D\(b\)|Trsdu\(0\.\d+s\) Fw\(\d+\) Fz\(\d+px\) C\(\$data\w+\)')})] + re.findall(r'At close:\s+\d:\d+PM EST', data)))
Output:
{'current': u'175.50', 'change': u'+3.00 (+1.74%)', 'published': u'At close: 4:00PM EST'}
You can directly use the API provided by Yahoo Finance.
For reference, check this answer:
Yahoo finance webservice API