Extract links from html page using BeautifulSoup - python

I need to extract some articles from the Biography website,
so from this page http://www.biography.com/people I need all the sublinks,
for example:
/people/ryan-seacrest-21095899
/people/edgar-allan-poe-9443160
but I have two problems:
1. When I try to find all <a> tags, I can't find the hrefs that I need:
import urllib2
from BeautifulSoup import BeautifulSoup

url = "http://www.biography.com/people"
text = urllib2.urlopen(url).read()
soup = BeautifulSoup(text)

# find every anchor tag on the page
links = soup.findAll('a')
for link in links:
    print(link)
2. There is a "see more" button, so how can I get the links for all the people on the whole website, not just those that appear on the first page?

The site you are showing uses Angular, and part of the content is generated with JavaScript. BeautifulSoup does not execute JS. You need to use http://selenium-python.readthedocs.io/ or a similar tool. Alternatively, you can dig into the AJAX calls the page makes, find the GET (or possibly POST) request you need, and pull the data through it directly.
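For illustration, here is a minimal Selenium sketch, assuming Firefox with geckodriver is installed locally; the CSS selector for the /people/ sublinks is my assumption about the page's markup, not something the site documents:
from selenium import webdriver

# minimal sketch, assuming a local Firefox + geckodriver install
driver = webdriver.Firefox()
driver.get("http://www.biography.com/people")

# assumption: the sublinks are anchors whose href starts with /people/
links = driver.find_elements_by_css_selector('a[href^="/people/"]')
for link in links:
    print(link.get_attribute("href"))

driver.quit()
For the "see more" button, you could either click it repeatedly through Selenium until no more results load, or find the AJAX request it triggers in the network tab and page through that endpoint directly.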

Related

Scraping with Python. Can't get wanted data

I am trying to scrape a website, but I have run into a problem: the HTML I get from Python differs from what I see in Chrome's inspector. I tried this with http://edition.cnn.com/election/results/states/arizona/house/01, where I want to scrape election results. I used the script below to check the HTML of the page, and the classes I need, like section-wrapper, are not there.
import requests
from bs4 import BeautifulSoup

page = requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "lxml")
print(soup)
Does anyone know what the problem is?
http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json
This site uses JavaScript to fetch its data; you can check the URL above.
You can find this URL in Chrome's dev tools. There are many requests listed, so check them out:
Chrome >> F12 >> Network tab >> F5 (refresh the page) >> double-click the .json URL >> it opens in a new tab
import requests
from bs4 import BeautifulSoup

page = requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "lxml")

# you can try all sorts of tags here; I used class "ad" and class "ec-placeholder"
g_data = soup.find_all("div", {"class": "ec-placeholder"})
h_data = soup.find_all("div", {"class": "ad"})
for item in g_data:
    print(item)
# for item in h_data:
#     print(item)
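Since the data lives at that JSON endpoint, a simpler sketch is to skip the HTML entirely and request the JSON; the shape of the payload is an assumption you would verify in dev tools first:
import requests

# minimal sketch: hit the JSON endpoint found in dev tools directly
url = "http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json"
data = requests.get(url).json()

# inspect the top-level structure before drilling into the results
print(type(data))
print(data)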

Website Scraping Specific Forms

For an extracurricular school project, I'm learning how to scrape a website. As you can see from the code below, I am able to scrape a form called 'elqFormRow' off of one page.
How would one go about scraping all occurrences of 'elqFormRow' on the whole website? I'd like to return the URL where each form was located into a list, but am running into trouble while doing so because I don't know how.
import bs4 as bs
import urllib.request

sauce = urllib.request.urlopen('http://engage.hpe.com/Template_NGN_Convert_EG-SW_Combined_TEALIUM-RegPage').read()
soup = bs.BeautifulSoup(sauce, 'lxml')

for div in soup.find_all('div', class_='elqFormRow'):
    print(div.text.strip())
You can grab the URLs from a page and follow them to (presumably) scrape the whole site. Something like this, which will require a little massaging depending on where you want to start and what pages you want:
import bs4 as bs
import requests

domain = "engage.hpe.com"
initial_url = 'http://engage.hpe.com/Template_NGN_Convert_EG-SW_Combined_TEALIUM-RegPage'

# get urls to scrape
text = requests.get(initial_url).text
initial_soup = bs.BeautifulSoup(text, 'lxml')
tags = initial_soup.findAll('a', href=True)
urls = []
for tag in tags:
    if domain in tag['href']:  # compare against the href, not the whole tag
        urls.append(tag['href'])
urls.append(initial_url)
print(urls)

# function to grab your info
def scrape_desired_info(url):
    out = []
    text = requests.get(url).text
    soup = bs.BeautifulSoup(text, 'lxml')
    for div in soup.find_all('div', class_='elqFormRow'):
        out.append(div.text.strip())
    return out

info = [scrape_desired_info(url) for url in urls if domain in url]
urllib stinks; use requests. If you need to go multiple levels down into the site, put the URL-finding section in a function and call it X times, where X is the number of levels of links you want to traverse.
Scrape responsibly: try not to get into a sorcerer's apprentice situation where you're hitting the site over and over in a loop, or following links external to the site. In general, I'd also not put the page you want to scrape in the question.
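As a rough sketch of that multi-level idea (the function split and the depth loop are mine, not tested against the real site):
import bs4 as bs
import requests

domain = "engage.hpe.com"

def find_links(url):
    # return the same-domain hrefs found on one page
    try:
        text = requests.get(url, timeout=10).text
    except requests.RequestException:
        return []
    soup = bs.BeautifulSoup(text, 'lxml')
    return [a['href'] for a in soup.find_all('a', href=True) if domain in a['href']]

def crawl(start_url, levels):
    # follow links `levels` deep, deduplicating so loops don't revisit pages
    seen = {start_url}
    frontier = [start_url]
    for _ in range(levels):
        next_frontier = []
        for url in frontier:
            for link in find_links(url):
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return seen

urls = crawl('http://engage.hpe.com/Template_NGN_Convert_EG-SW_Combined_TEALIUM-RegPage', levels=2)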

Extract Link URL After Specified Element with Python and Beautifulsoup4

I'm trying to extract a link from a page with Python and the Beautiful Soup library, but I'm stuck. The link is on the following page, in the sidebar area, directly underneath the h4 subtitle "Original Source":
http://www.eurekalert.org/pub_releases/2016-06/uonc-euc062016.php
I've managed to (mostly) isolate the link, but I'm unsure how to narrow my targeting further to actually extract it. Here's my code so far:
import requests
from bs4 import BeautifulSoup
url = "http://www.eurekalert.org/pub_releases/2016-06/uonc-euc062016.php"
data = requests.get(url)
soup = BeautifulSoup(data.text, 'lxml')
source_url = soup.find('section', class_='widget hidden-print').find('div', class_='widget-content').findAll('a')[-1]
print(source_url)
I am currently getting the full HTML of the last element I've isolated, when all I want is the link itself. Of note, this is the only link on the page that I'm trying to get.
You're looking for the link, which is the href HTML attribute. source_url is a bs4.element.Tag, which has a get method:
source_url.get('href')
You almost got it!!
SOLUTION 1:
You just have to read the .text attribute of the tag you've assigned to source_url.
So instead of:
print(source_url)
You should use:
print(source_url.text)
Output:
http://news.unchealthcare.org/news/2016/june/e-cigarette-use-can-alter-hundreds-of-genes-involved-in-airway-immune-defense
SOLUTION 2:
You should call source_url.get('href') to get just the href attribute from the element returned by your soup.find chain.
print(source_url.get('href'))
Output:
http://news.unchealthcare.org/news/2016/june/e-cigarette-use-can-alter-hundreds-of-genes-involved-in-airway-immune-defense

Can Beautiful Soup parse hidden attributes?

So I used Beautiful Soup in Python to parse a page that displays all my Facebook friends. Here's my code:
import requests
from bs4 import BeautifulSoup

r = requests.get("https://www.facebook.com/xxx.xxx/friends?pnref=lhc")
soup = BeautifulSoup(r.content)
for link in soup.find_all("a"):
    print(link.get('href'))
The thing is, it displays a lot of links, but none of them are links to my friends' profiles, which are displayed normally on the webpage.
On doing Inspect Element, I found this:
<div class="hidden_elem"><code id="u_0_2m"><!--
The code continues, and the links to their profiles are commented out inside an li tag within that div tag.
Two questions, mainly:
1. What does this mean, and why can't Beautiful Soup read them?
2. Is there a way to read them?
I don't really plan to achieve anything by this, just curious.
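For the curious, a minimal sketch of reading them: BeautifulSoup stores <!-- ... --> blocks as Comment text nodes rather than as markup, so find_all("a") never descends into them. Re-parsing each comment body as HTML (assuming the links really are inside those comments, as the inspector suggests) exposes the anchors:
from bs4 import BeautifulSoup, Comment
import requests

# minimal sketch, reusing the request from the question above
r = requests.get("https://www.facebook.com/xxx.xxx/friends?pnref=lhc")
soup = BeautifulSoup(r.content, 'lxml')

# comments are text nodes, so search for them by type
comments = soup.find_all(text=lambda t: isinstance(t, Comment))
for comment in comments:
    inner = BeautifulSoup(comment, 'lxml')  # re-parse the comment body as HTML
    for link in inner.find_all('a', href=True):
        print(link['href'])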

How to use Beautiful soup to return destination from HTML anchor tags

I am using Python 2 and Beautiful Soup to parse HTML retrieved using the requests module:
import requests
from bs4 import BeautifulSoup
site = requests.get("http://www.stackoverflow.com/")
HTML = site.text
links = BeautifulSoup(HTML).find_all('a')
This returns a list of anchor tags whose output looks like <a href="...">Navigate</a>.
The content of the href attribute for each anchor tag can take several forms: it could be a JavaScript call on the page, a relative address to a page on the same domain (/next/one/file.php), or a specific web address (http://www.stackoverflow.com/).
Using BeautifulSoup, is it possible to return the web addresses of both the relative and specific addresses in one list, excluding all JavaScript calls and the like, leaving only navigable links?
From the BS docs:
One common task is extracting all the URLs found within a page’s <a> tags:
for link in soup.find_all('a'):
    print(link.get('href'))
You can filter out the href="javascript:whatever()" cases like this:
hrefs = []
for link in soup.find_all('a'):
    # has_attr replaces the old has_key, which newer BeautifulSoup versions removed
    if link.has_attr('href') and not link['href'].lower().startswith('javascript:'):
        hrefs.append(link['href'])
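To also turn the relative addresses into navigable absolute URLs, you can resolve them against the page you fetched; here is a minimal sketch using the standard library (urlparse in Python 2, since the question uses Python 2; urllib.parse in Python 3):
from urlparse import urljoin  # Python 3: from urllib.parse import urljoin

base = "http://www.stackoverflow.com/"
navigable = []
for link in soup.find_all('a', href=True):
    href = link['href']
    if not href.lower().startswith('javascript:'):
        navigable.append(urljoin(base, href))  # resolves relative paths against base
print(navigable)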
