How to extract URLs from HTML - Python

I'm a newbie at web scraping. Here is what I do:
from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
html = urlopen("http://chgk.tvigra.ru/letopis/?2016/2016_spr#27mar")
soup = BeautifulSoup(html, "html.parser")
res = soup.find_all('a', {'href': re.compile("r'\b?20\b'")})
print (res)
and get
[]
My goal is this fragment
<script language="javascript" type="text/javascript">
cont = new Array();
count = new Array();
for (i=1979; i <=2015; i++){count[i]=0};
cont[1979] = "<li><a href='?1979_1#24jan'>24 января</a>" +
..............
cont[2016] = "<li><a href='?2016/2016_spr#cur'>Весенняя серия</a>" +
"<li><a href='?2016/2016_sum#cur'>Летняя серия</a>" +
"<li><a href='?2016/2016_aut#cur'>Осенняя серия</a>" +
"<li><a href='?2016/2016_win#cur'>Зимняя серия</a>";
And I am trying to get a result like this:
'?2016/2016_spr#cur'
'?2016/2016_sum#cur'
'?2016/2016_aut#cur'
'?2016/2016_win#cur'
I want everything from 2000 up to the present (that is why the '20' appears in "r'\b?20\b'"). Can you help me, please?

Preliminaries:
>>> import requests
>>> import bs4
>>> page = requests.get('http://chgk.tvigra.ru/letopis/?2016/2016_spr#27mar').content
>>> soup = bs4.BeautifulSoup(page, 'lxml')
Having done this, it might seem that the most straightforward way of identifying the script element is this:
>>> scripts = soup.findAll('script', text=bs4.re.compile('cont = new Array();'))
However, scripts proves to be an empty list. (Most likely because the pattern is a regular expression in which the parentheses of Array() are metacharacters: the empty group matches the empty string, so the pattern can only ever match the literal text cont = new Array; — escaping them, as in Array\(\);, would fix the match.)
The basic approach works if I choose a different target within the script, but it would appear that it's unsafe to depend on the exact formatting of the content of a Javascript script element.
>>> scripts = soup.find_all(string=bs4.re.compile('i=1979'))
>>> len(scripts)
1
Still, this might be good enough for you. Just notice that the script has a change function at the end that needs to be discarded.
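Discarding that tail could be as simple as cutting the text at the function definition (a sketch, assuming the tail is declared as function change, as its name suggests):
>>> js = scripts[0]
>>> js = js[:js.find('function change')]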
A safer approach might be to look for the containing table element, then the second td element within that and finally the script within that.
>>> table = soup.find_all('table', class_='common_table')
>>> tds = table[0].findAll('td')[1]
>>> script = tds.find('script')
Again, you will need to discard function change.
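From there, a regular expression can pull the hrefs out of the script text. This is only a sketch, assuming script.string holds the Javascript shown in the question, but the \?20 prefix matches the '?20…' pattern from the question, so it keeps the years from 2000 on:
>>> import re
>>> hrefs = re.findall(r"href='(\?20[^']*)'", script.string)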

You can use get('attribute') and then filter the results if needed:
from urllib.request import urlopen
from bs4 import BeautifulSoup
html = urlopen("http://chgk.tvigra.ru/letopis/?2016/2016_spr#27mar")
soup = BeautifulSoup(html, "html.parser")
res = [link.get('href') for link in soup.find_all('a')]
print (res)
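If you only want the links from 2000 onward, you can filter that list afterwards. A minimal sketch, assuming the hrefs of interest start with '?20' as in the question's fragment:
# Skip anchors without an href, then keep only the '?20...' ones.
res = [href for href in res if href and href.startswith('?20')]
print(res)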

Related

Using multiple for loops with Python using Beautiful Soup

from bs4 import BeautifulSoup as bs
import requests
import pandas as pd
url = "https://www.property24.com/for-sale/woodland-hills-wildlife-estate/bloemfontein/free-state/10467/109825373"
data = requests.get(url)
soup = bs(data.content,"html.parser")
The code below is a test to get one item.
property_overview = soup.find(class_="p24_regularListing").find(class_="p24_propertyOverview").find(class_='p24_propertyOverviewRow').find(class_='col-xs-6 p24_propertyOverviewKey').text
property_overview
Output: 'Listing Number'
The code below is what we have to get all of the col-xs-6 p24_propertyOverviewKey items:
p24_regularListing_items = soup.find_all(class_="p24_regularListing")
for p24_propertyOverview_item in p24_regularListing_items:
    p24_propertyOverview_items = p24_propertyOverview_item.find_all(class_="p24_propertyOverview")
    for p24_propertyOverviewRow_item in p24_propertyOverview_items:
        p24_propertyOverviewRow_items = p24_propertyOverviewRow_item.find_all(class_="p24_propertyOverviewRow")
        for p24_propertyOverviewKey_item in p24_propertyOverviewRow_items:
            p24_propertyOverviewKey_items = p24_propertyOverviewKey_item.find_all(class_="col-xs-6 p24_propertyOverviewKey")
            p24_propertyOverviewKey_items
The code above only outputs one item, not all of them.
To put things more simply, you can use soup.select(), and then .get_text() to extract the text from each tag.
from bs4 import BeautifulSoup
import requests
resp = requests.get(
    "https://www.property24.com/for-sale/woodland-hills-wildlife-estate/bloemfontein/free-state/10467/109825373"
)
resp.raise_for_status()
soup = BeautifulSoup(resp.content, "html.parser")
texts = []
for tag in soup.select(
    # NB: this selector uses Python's implicit string concatenation
    # to split it onto several lines.
    ".p24_regularListing "
    ".p24_propertyOverview "
    ".p24_propertyOverviewRow "
    ".p24_propertyOverviewKey"
):
    texts.append(tag.get_text())
print(texts)
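If the keys come back with stray whitespace, get_text() accepts a strip argument, and the loop collapses naturally into a list comprehension. The same selector, as a sketch:
texts = [
    tag.get_text(strip=True)
    for tag in soup.select(
        ".p24_regularListing .p24_propertyOverview "
        ".p24_propertyOverviewRow .p24_propertyOverviewKey"
    )
]
print(texts)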

How to scrape data from an interactive chart using Python?

I have a link that represents the exact graph I want to scrape: https://index.minfin.com.ua/ua/economy/index/svg.php?indType=1&fromYear=2010&acc=1
I simply can't understand whether it is XML or an SVG graph, or how to scrape the data. I think I need to use bs4 and requests, but I don't know how to go about it.
Can anyone help?
You will load HTML like this:
import requests
url = "https://index.minfin.com.ua/ua/economy/index/svg.php?indType=1&fromYear=2010&acc=1"
resp = requests.get(url)
data = resp.text
Then you create a BeautifulSoup object from this HTML.
from bs4 import BeautifulSoup
soup = BeautifulSoup(data, features="html.parser")
After this, how to parse out what you want is usually quite subjective, and candidate solutions may vary a lot. This is how I did it: using BeautifulSoup, I parsed all "rect" elements and checked whether an "onmouseover" attribute exists on each one.
rects = soup.svg.find_all("rect")
yx_points = []
for rect in rects:
    if rect.has_attr("onmouseover"):
        text = rect["onmouseover"]
        x_start_index = text.index("'") + 1
        y_finish_index = text[x_start_index:].index("'") + x_start_index
        yx = text[x_start_index:y_finish_index].split()
        print(text[x_start_index:y_finish_index])
        yx_points.append(yx)
From each onmouseover= attribute I scraped out the quoted part and got pieces like 02.2015 155,1.
This is what yx_points looks like now:
[['12.2009', '100,0'], ['01.2010', '101,8'], ['02.2010', '103,7'], ...]
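If you want actual numbers, note that the values use a comma as the decimal separator. A conversion pass might look like this (my own sketch, not part of the scraping itself):
# Turn ['02.2015', '155,1'] style pairs into (month, float) tuples.
points = [(date, float(value.replace(',', '.'))) for date, value in yx_points]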
from bs4 import BeautifulSoup
import requests
import re

# First get all the text from the url.
url = "https://index.minfin.com.ua/ua/economy/index/svg.php?indType=1&fromYear=2010&acc=1"
response = requests.get(url)
html = response.text

# Find all the tags in which the data is stored.
soup = BeautifulSoup(html, 'lxml')
texts = soup.findAll("rect")
final = []
for each in texts:
    names = each.get('onmouseover')
    try:
        q = re.findall(r"'(.*?)'", names)
        final.append(q[0])
    except Exception as e:
        print(e)
# The details are appended to the final variable.

Building a Python web scraper, need help getting correct output

I am building a web scraper in Python.
The purpose of my scraper is to fetch all the links to websites from this webpage http://www.ebizmba.com/articles/torrent-websites
I want output like:
www.thepiratebay.se
www.kat.ph
I am new to Python and scraping, and I am doing this just for practice. Please help me get the right output.
My code:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.ebizmba.com/articles/torrent-websites")
soup = BeautifulSoup(r.content, "html.parser")
data = soup.find_all("div", {"class": "main-container-2"})
for item in data:
    print(item.contents[1].find_all("a"))
My output: http://i.stack.imgur.com/Xi37B.png
If you are web scraping for practice, have a look at regular expressions.
This would get just the headline links... The Needle1 string is the match pattern; the parentheses in (http:.*?) delimit the match group.
import urllib2
import re

myURL = "http://www.ebizmba.com/articles/torrent-websites"
req = urllib2.Request(myURL)
Needle1 = '<p><a href="(http:.*?)" rel="nofollow" target="_blank">'
for match in re.finditer(Needle1, urllib2.urlopen(req).read()):
    print(match.group(1))
Use .get('href') like this:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.ebizmba.com/articles/torrent-websites")
soup = BeautifulSoup(r.text, "html.parser")
data = soup.find_all("div", {"class": "main-container-2"})
for i in data:
    for j in i.contents[1].find_all("a"):
        print(j.get('href'))
Full output:
http://www.thepiratebay.se
http://siteanalytics.compete.com/thepiratebay.se
http://quantcast.com/thepiratebay.se
http://www.alexa.com/siteinfo/thepiratebay.se/
http://www.kickass.to
http://siteanalytics.compete.com/kickass.to
http://quantcast.com/kickass.to
http://www.alexa.com/siteinfo/kickass.to/
http://www.torrentz.eu
http://siteanalytics.compete.com/torrentz.eu
http://quantcast.com/torrentz.eu
http://www.alexa.com/siteinfo/torrentz.eu/
http://www.extratorrent.cc
http://siteanalytics.compete.com/extratorrent.cc
http://quantcast.com/extratorrent.cc
http://www.alexa.com/siteinfo/extratorrent.cc/
http://www.yify-torrents.com
http://siteanalytics.compete.com/yify-torrents.com
http://quantcast.com/yify-torrents.com
http://www.alexa.com/siteinfo/yify-torrents.com
http://www.bitsnoop.com
http://siteanalytics.compete.com/bitsnoop.com
http://quantcast.com/bitsnoop.com
http://www.alexa.com/siteinfo/bitsnoop.com/
http://www.isohunt.to
http://siteanalytics.compete.com/isohunt.to
http://quantcast.com/isohunt.to
http://www.alexa.com/siteinfo/isohunt.to/
http://www.sumotorrent.sx
http://siteanalytics.compete.com/sumotorrent.sx
http://quantcast.com/sumotorrent.sx
http://www.alexa.com/siteinfo/sumotorrent.sx/
http://www.torrentdownloads.me
http://siteanalytics.compete.com/torrentdownloads.me
http://quantcast.com/torrentdownloads.me
http://www.alexa.com/siteinfo/torrentdownloads.me/
http://www.eztv.it
http://siteanalytics.compete.com/eztv.it
http://quantcast.com/eztv.it
http://www.alexa.com/siteinfo/eztv.it/
http://www.rarbg.com
http://siteanalytics.compete.com/rarbg.com
http://quantcast.com/rarbg.com
http://www.alexa.com/siteinfo/rarbg.com/
http://www.1337x.org
http://siteanalytics.compete.com/1337x.org
http://quantcast.com/1337x.org
http://www.alexa.com/siteinfo/1337x.org/
http://www.torrenthound.com
http://siteanalytics.compete.com/torrenthound.com
http://quantcast.com/torrenthound.com
http://www.alexa.com/siteinfo/torrenthound.com/
https://demonoid.org/
http://siteanalytics.compete.com/demonoid.pw
http://quantcast.com/demonoid.pw
http://www.alexa.com/siteinfo/demonoid.pw/
http://www.fenopy.se
http://siteanalytics.compete.com/fenopy.se
http://quantcast.com/fenopy.se
http://www.alexa.com/siteinfo/fenopy.se/
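If you only want the site names themselves, as in the desired output at the top, you can rely on the pattern visible above: each site contributes four links (the site itself plus its compete, quantcast and alexa pages). A sketch (urlparse lives in urllib.parse on Python 3; on the Python 2 used elsewhere in this answer it is the urlparse module):
from urllib.parse import urlparse

def site_names(hrefs):
    # Every fourth link is the main site; netloc drops the 'http://' scheme.
    return [urlparse(href).netloc for href in hrefs[::4]]

print(site_names([
    'http://www.thepiratebay.se',
    'http://siteanalytics.compete.com/thepiratebay.se',
    'http://quantcast.com/thepiratebay.se',
    'http://www.alexa.com/siteinfo/thepiratebay.se/',
    'http://www.kickass.to',
]))
# ['www.thepiratebay.se', 'www.kickass.to']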

BeautifulSoup Cannot Find Tag

I am trying to scrape this page and all of the other pages like it. I have been using BeautifulSoup (I have also tried lxml, but there have been installation issues). I am using the following code:
value = "http://www.presidency.ucsb.edu/ws/index.php?pid=99556"
desiredTag = "span"
r = urllib2.urlopen(value)
data = BeautifulSoup(r.read(), 'html5lib')
displayText = data.find_all(desiredTag)
print displayText
displayText = " ".join(str(displayText))
displayText = BeautifulSoup(displayText, 'html5lib')
For some reason this isn't pulling back the <span class="displaytext">. I have also tried desiredTag as p.
Am I missing something?
You are definitely experiencing the differences between different parsers used by BeautifulSoup. html.parser and lxml worked for me:
data = BeautifulSoup(urllib2.urlopen(value), 'html.parser')
Proof:
>>> import urllib2
>>> from bs4 import BeautifulSoup
>>>
>>> url = "http://www.presidency.ucsb.edu/ws/index.php?pid=99556"
>>>
>>> data = BeautifulSoup(urllib2.urlopen(url), 'html.parser')
>>> data.find("span", class_="displaytext").text
u'PARTICIPANTS:Former Speaker of the House Newt Gingrich (GA);
...
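The underlying point is that each parser repairs malformed markup differently, so the same page can produce different trees. A quick way to compare parsers on a snippet (a sketch in Python 3 syntax; lxml and html5lib are separate installs):
from bs4 import BeautifulSoup

markup = "<span class='displaytext'>text<p>unclosed paragraph"
for parser in ('html.parser', 'lxml', 'html5lib'):
    # Each parser may place the unclosed <p> differently relative to the <span>.
    print(parser, BeautifulSoup(markup, parser).find('span'))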

Using Python to select a specific toggle_link

The big goal is to find specific house bills.
With this code I am trying to select the link: /legislation?q=%7B%22congress%22%3A%22113%22%2C%22chamber%22%3A%22House%22%7D to narrow down my search to house bills.
from bs4 import BeautifulSoup
import urllib2
soup = BeautifulSoup(urllib2.urlopen("https://beta.congress.gov/legislation"))
for link in soup.find_all('a'):
    soup_links = link.get('href')

import re
r1 = re.compile(r'/legislation(\?\S+congress\S+chamber\S+House\S+)')
print r1.findall(soup_links)
When I do this I get an empty list instead of the link.
It isn't my regular expression that is at fault, because the following works:
r2 = re.compile(r'\S+congress\S+chamber\S+House\S+')
newstring = '/legislation?q=%7B%22congress%22%3A%22113%22%2C%22chamber%22%3A%22House%22%7D'
print r2.findall(newstring)
You are re-assigning a new value to soup_links each iteration; in the end only the last href attribute is assigned.
BeautifulSoup can do the searching for you:
soup = BeautifulSoup(urllib2.urlopen("https://beta.congress.gov/legislation"))
r1 = re.compile(r'/legislation(\?\S+congress\S+chamber\S+House\S+)')
soup_links = [l['href'] for l in soup.find_all('a', href=r1)]
print soup_links
This produces the one matching link:
>>> from bs4 import BeautifulSoup
>>> import urllib2
>>> import re
>>> soup = BeautifulSoup(urllib2.urlopen("https://beta.congress.gov/legislation"))
>>> r1 = re.compile(r'/legislation(\?\S+congress\S+chamber\S+House\S+)')
>>> [l['href'] for l in soup.find_all('a', href=r1)]
['/legislation?q=%7B%22congress%22%3A%22113%22%2C%22chamber%22%3A%22House%22%7D']
If you only expect one link to match, use soup.find() instead of soup.find_all():
soup = BeautifulSoup(urllib2.urlopen("https://beta.congress.gov/legislation"))
r1 = re.compile(r'/legislation(\?\S+congress\S+chamber\S+House\S+)')
soup_link = soup.find('a', href=r1)
print soup_link['href']
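As an aside, the href that comes back is URL-encoded JSON; decoding it makes the filter readable. A sketch using Python 3's urllib.parse (in the Python 2 used above, the function is urllib.unquote):
from urllib.parse import unquote

href = '/legislation?q=%7B%22congress%22%3A%22113%22%2C%22chamber%22%3A%22House%22%7D'
print(unquote(href))
# /legislation?q={"congress":"113","chamber":"House"}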
