Unable to fetch full data inside <div> - Python

HTML:
<div>
Está en: <b>
Inicio /
Valle Del Cauca /
Cali /
Zona Sur /
Zona Sur /
<a>Los Naranjos Conjunto Campestre</a></b>
</div>
I am unable to fetch all the <a> tags inside the <div> tag.
My code:
import requests
from bs4 import BeautifulSoup
page = requests.get('https://www.fincaraiz.com.co/oceana-52/barranquilla/proyecto-nuevo-det-1041165.aspx')
soup = BeautifulSoup(page.content, 'html.parser')
first = soup.find('div', 'breadcrumb left')
link = first.find('div')
a_link = link.findAll('a')
print(a_link)
The above code prints only the first <a> tag:
[Inicio]
The following output is required from the above HTML:
Valle Del Cauca
Cali
Zona Sur
Zona Sur
I'm not sure why nothing after the '/' inside the <b> tag is printed.

You can use the lxml parser instead; html.parser normalizes the actual source before BS4 parses it, which is why the remaining <a> tags get lost.
soup = BeautifulSoup(page.content, 'lxml')
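For completeness, a minimal sketch of the fix (assuming the 'breadcrumb left' class from the question; the live page may have changed since):
import requests
from bs4 import BeautifulSoup

page = requests.get('https://www.fincaraiz.com.co/oceana-52/barranquilla/proyecto-nuevo-det-1041165.aspx')
soup = BeautifulSoup(page.content, 'lxml')  # lxml copes better with this page's unclosed tags

breadcrumb = soup.find('div', 'breadcrumb left')
if breadcrumb is not None:
    for a in breadcrumb.find_all('a'):
        print(a.get_text(strip=True))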


Python regex: re.search() does not find string

I have trouble using the re.search() method. I am trying to extract an image link from the following string:
div class="beitragstext">\n\t\t\t\t<p>Es gibt derzeit keine Gründe mehr NICHT auf 1.1.3 zu springen!</p>\n<p><img src="https://www.iphoneblog.de/wp-content/uploads/2008/02/372948722-6ec4028a80.jpg" alt="372948722_6ec4028a80.jpg" border="0" width="430" height="466" /></p>\n<p>Photo: factoryjoe
I want to extract the URL of the first image, and the URL only.
This is my code:
imageURLObject = re.search(r'http(?!.*http).*?\.(jpg|png|JPG|PNG)', match)
The result should be https://www.iphoneblog.de/wp-content/uploads/2008/02/372948722-6ec4028a80.jpg
Instead, the method returns None.
But if I use the regex re.search(r'http.*?\.(jpg|png|JPG|PNG)', match), without the (?!.*http) lookahead, the first http hit matches up to .(jpg|png|JPG|PNG), and the return is:
http://www.flickr.com/photos/factoryjoe/372948722/"><img src="https://www.iphoneblog.de/wp-content/uploads/2008/02/372948722-6ec4028a80.jpg
Can someone help me please ? :-)
Use BeautifulSoup for HTML parsing:
https://beautiful-soup-4.readthedocs.io/en/latest/
from bs4 import BeautifulSoup
html = """
<div class="beitragstext">\n\t\t\t\t<p>Es gibt derzeit keine Gründe mehr NICHT auf 1.1.3 zu springen!</p>\n<p><img src="https://www.iphoneblog.de/wp-content/uploads/2008/02/372948722-6ec4028a80.jpg" alt="372948722_6ec4028a80.jpg" border="0" width="430" height="466" /></p>\n<p>Photo: factoryjoe
"""
soup = BeautifulSoup(html, 'lxml')
links = soup.find_all('div', {'class': 'beitragstext'})
for i in links:
    print(i.find('img')['src'])
Output:
https://www.iphoneblog.de/wp-content/uploads/2008/02/372948722-6ec4028a80.jpg
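If you would rather stick with a plain regex, anchoring the match to an img tag's src attribute sidesteps the lookahead entirely. A sketch, assuming the HTML snippet is held in the string variable match as in the question:
import re

# 'match' holds the HTML snippet from the question
m = re.search(r'<img[^>]*\bsrc="(http[^"]+?\.(?:jpg|png))"', match, re.IGNORECASE)
if m:
    print(m.group(1))  # prints the iphoneblog.de image URL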

Scrape information within <a href and <span

I would need to scrape journalists' names and journals from this website:
https://www.politicasufacebook.it/giornalisti/
Specifically, I want to get the <a href> information (the journalist's name) and the <span> content (the newspaper's name).
For example, for Andrea Scanzi the name is the text of an <a> tag, and the newspaper Il Fatto Quotidiano sits in a <span>:
<span style="font-size:13px;line-height:25px"> Il Fatto Quotidiano</span>
I have written the following:
import requests
from bs4 import BeautifulSoup as bs

with requests.Session() as s:  # use a session object for efficiency of TCP re-use
    s.headers = {'User-Agent': 'Mozilla/5.0'}
    r = s.get('https://www.politicasufacebook.it/giornalisti/')
    soup = bs(r.content, 'lxml')
but I do not know how to continue in order to extract such information.
You can use soup.find_all with the desired tag and attributes.
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.politicasufacebook.it/giornalisti/')
soup = BeautifulSoup(r.content, 'lxml')
journalists = soup.find_all('a', {'style': 'color:#003060', 'target': '_blank'})
newspapers = soup.find_all('span', {'style': 'font-size:13px;line-height:25px'})
for i, v in enumerate(journalists):
    print(v.text.strip() + ' - ' + newspapers[i].text.strip())
Output:
Roberto Saviano - La Repubblica
Marco Travaglio - Il Fatto Quotidiano
Enrico Mentana - La7
Andrea Scanzi - Il Fatto Quotidiano
Massimo Gramellini - Corriere Della Sera
Nicola Porro - Rete 4
Salvo Sottile - Rai1
Carmelo Abbate - Storie Nere
Gad Lerner - autonomo
Michele Serra - La Repubblica
...
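One caveat about indexing newspapers[i]: it assumes both find_all calls return equally long lists that line up one-to-one. Pairing the lists with zip is a safer equivalent under the same assumption about the page's inline styles:
for journalist, newspaper in zip(journalists, newspapers):
    print(journalist.text.strip() + ' - ' + newspaper.text.strip())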

Navigating the DOM tree with BeautifulSoup

I'm scraping a website for prices of listings, and I can't figure out how to navigate the tree structure.
Ideally I would have a for loop iterating over all the li elements to do some data analysis on each, hence I would love an iterator over the specific elements that are nested way down.
I tried to call nested elements à la .div.div. I think I'm just new to this; some lines of help would be greatly appreciated!
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

myurl = 'https://www.2ememain.be/l/velos-velomoteurs/q/velo/'
uClient = uReq(myurl)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "lxml")
containers = page_soup.findAll(
    "li", {"class": "mp-Listing mp-Listing--list-item"})
Here is the tree structure:
<li class="mp-Listing mp-Listing--list-item">
<figure class="mp-Listing-image-container"><a
data-tracking="mucLxVHX8FbvYBHPHfGkOCRq9VFszDlhSxgIClJUJRXbTYMnnOw8kI1NFuitzMperXfQZoyyS2Mx8VbGSZB7_jITV8iJZErGmgWsWp4Arvmpog9Hw3EO8q45U-6chavRHHXbOGPOeNci_683vlir1_SAK-XDa7Znjl22XHOxxH_n3QwloxZSRCxAKGjVYg8aQGTfUgZd2b9DDBdUR2fqyUEUXqnMGZ5hjKlTKTR67obF26tTc8kc1HAsv_fvTEfJW-UxpJCuVhXjKi3pcuL99F8QesdivVy1p_jhs7KL-528jJXZ-LGNSz6cloZlO3yEsAdN_NxI4vz76mTfPY-fiRuAlSPfcjP8KYuDw9e8Qz-QyhUNfhIzOZyU6r1suEfcihY9w_HYY-Qn6vmZ8Bw9ZZn4CEV7odI4_7RzYe8OBw4UmTXAODFxJgS-7fnlWgUAZqX8wu_WydbQLqDqpMXEMsbzKFxaerTLhhUGBqNlBEzpJ0jBIm7-hafuMH5v3IRU0Iha8fUbu7soVLYTuTcbBG2dUgEH-O2-bALjnkMB8XWlICCM14klxeRyOAFscVKg2m6p5aanRR38dgEXuvVE9UcSjHW43JeNSv3gJ7GwJww"
href="/a/velos-velomoteurs/velos-ancetres-oldtimers/a34926285-peugeot-velo-de-course-1970.html?c=17f70af2bde4a155c6d568ce3cad9ab7&previousPage=lr">
<div class="mp-Listing-image-item mp-Listing-image-item--main"
style="background-image:url(//i.ebayimg.com/00/s/NTI1WDcwMA==/z/LlYAAOSw3Rdc-miZ/$_82.JPG)"><img
alt="Peugeot - V�lo de course - 1970" data-img-src="Peugeot - V�lo de course - 1970"
src="//i.ebayimg.com/00/s/NTI1WDcwMA==/z/LlYAAOSw3Rdc-miZ/$_82.JPG"
title="Peugeot - V�lo de course - 1970" /></div>
</a></figure>
<div class="mp-Listing-content">
<div class="mp-Listing-group mp-Listing-group--main">
<h3 class="mp-Listing-title"><a
data-tracking="mucLxVHX8FbvYBHPHfGkOCRq9VFszDlhSxgIClJUJRXbTYMnnOw8kI1NFuitzMperXfQZoyyS2Mx8VbGSZB7_jITV8iJZErGmgWsWp4Arvmpog9Hw3EO8q45U-6chavRHHXbOGPOeNci_683vlir1_SAK-XDa7Znjl22XHOxxH_n3QwloxZSRCxAKGjVYg8aQGTfUgZd2b9DDBdUR2fqyUEUXqnMGZ5hjKlTKTR67obF26tTc8kc1HAsv_fvTEfJW-UxpJCuVhXjKi3pcuL99F8QesdivVy1p_jhs7KL-528jJXZ-LGNSz6cloZlO3yEsAdN_NxI4vz76mTfPY-fiRuAlSPfcjP8KYuDw9e8Qz-QyhUNfhIzOZyU6r1suEfcihY9w_HYY-Qn6vmZ8Bw9ZZn4CEV7odI4_7RzYe8OBw4UmTXAODFxJgS-7fnlWgUAZqX8wu_WydbQLqDqpMXEMsbzKFxaerTLhhUGBqNlBEzpJ0jBIm7-hafuMH5v3IRU0Iha8fUbu7soVLYTuTcbBG2dUgEH-O2-bALjnkMB8XWlICCM14klxeRyOAFscVKg2m6p5aanRR38dgEXuvVE9UcSjHW43JeNSv3gJ7GwJww"
href="/a/velos-velomoteurs/velos-ancetres-oldtimers/a34926285-peugeot-velo-de-course-1970.html?c=17f70af2bde4a155c6d568ce3cad9ab7&previousPage=lr">Peugeot
- V�lo de course - 1970</a></h3>
<p class="mp-Listing-description mp-text-paragraph">Cet objet est vendu par Catawiki. Cliquez sur le lien
pour �tre redirig� vers le site Catawiki et placer votre ench�re.v�lo de cou<span><input
class="mp-Listing-show-more" id="a34926285" type="checkbox" /><span
class="mp-Listing-description mp-Listing-description--extended">rse peugeot des ann�es 70,
�quip� de pneus neufs (michelin dynamic sport), freins Mafac racer, d�railleur allvit, 3
plateaux, 21 vitesses.selle Basano</span><label for="a34926285">...<span
class="mp-Icon mp-Icon--xs mp-svg-arrow-down"></span><span
class="mp-Icon mp-Icon--xs mp-svg-arrow-up"></span></label></span></p>
<div class="mp-Listing-attributes"></div>
</div>
<div class="mp-Listing-group mp-Listing-group--aside">
<div class="mp-Listing-group mp-Listing-group--top-block"><span
class="mp-Listing-price mp-text-price-label">Voir description</span><span
class="mp-Listing-seller-name"><a class="mp-TextLink"
href="/u/catawiki/38096837/">Catawiki</a></span><span
class="mp-Listing-date">Aujourd'hui</span><span class="mp-Listing-location">Toute la
Belgique<br /></span></div>
<div class="mp-Listing-group mp-Listing-group--bottom-block"><span class="mp-Listing-priority">Annonce au
top</span><span class="mp-Listing-seller-link"><a class="mp-TextLink undefined"
href="https://admarkt.2dehands.be/buyside/url/RK-f5Gyr8TS9VKWPn06TDHk8zCWeSU5-PsQDuvr5tYpoRXQYzjmhI4E8OX9dXcZb0TEQOFSDMueu3s5kqHSihdgWdlYIhSdweDBq0ckhYm7kU8NzKSx7FWvKA8-ZSJUz6PW439SHCTDUa2er4_kqge-fyr8zJemRXzISpFdvVIzVufagipJY-9jozmgnesM_bfBJxR6r0IvKWR8GYnfgv0bPsg1Ny5CQMsw4LsI33lUP_g6cYuGIcGOeEupRpJtf1sXv11G7BTj3gZAo5fvVk35hdfr5LVSJxJYsDUOxS7pdcFtkVO-0EEbZwLG3FlDYaPqLnComuKbmrSwzIW6EwfWXvr1lvifS5cOPflPSsVE319HKQ06w2vk4-4N9-E-cSXye9Yj_YHhNCJdEynvHV0XWkMkdLE_flG421UIIHVbDZdKHV429Ka7HQQSdpbyU6nQ94UsVzRfi2gEgXM18WuI96qkT8oFtqZwGrrE4wlyLuDJnPWkzaYmEwsSoPslrkv_mY66yEOLYsLolpTF3aTRU3sqv0GvZwnPkR04uZJY8GeL70uz3XaP5mYPxKz-pmCFbnJN_i9oiA_LjEIrEzSmvCEM_jViUfPB4FIib7VEi_gag5qWNYYxfkIyT4mC9Y0EKx0JbNHzyBs1062ETCiFvtPaAgconmyqW2ztnw4it_D10qAEemDppNOXKMmX_Jg-feuFKwq-MdIxiyJK3yoiKPXzMEEBa2WXqchDAPF52YmcVjq8HDORqYFkq5-iLumz6Y8ut-smKs_-vMG7k52nO3RW3RzuO0syMLBlZGiqUnADJtj0hmGmzqHXRqflq4QCTEE2vmG2flfMSIz9XJ7ECg73CP5OSNPg5VlzWfCVgd7o1TYd-rFBFXWM5Xz-ZlCA03LOZtP3BeQR3-TnSL6MNWo46vEtHq5ntcF-TrFTl4h01C5DNF_7R4W36CqQ4"
rel="noopener noreferrer nofollow" target="_blank">Visiter le site internet</a></span></div>
</div>
</div>
</li>
The idea is to fetch
<span class="mp-Listing-seller-name"><a class="mp-TextLink">
through tag referencing, like containers.div.span....
I believe this is what you're looking for:
from bs4 import BeautifulSoup as bs
target = """<the HTML snippet from the question, as a string>"""
page_soup = bs(target, "lxml")
containers = page_soup.find_all('li')
for container in containers:
    item = container.find_all("span", class_="mp-Listing-seller-name")
    print(item)
Output:
[<span class="mp-Listing-seller-name"><a class="mp-TextLink" href="/u/catawiki/38096837/">Catawiki</a></span>]
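If you only want the seller's name and profile link rather than the whole tag, a follow-up sketch on the same containers:
for container in containers:
    seller = container.find("span", class_="mp-Listing-seller-name")
    if seller is not None and seller.a is not None:
        # e.g. "Catawiki /u/catawiki/38096837/"
        print(seller.a.get_text(strip=True), seller.a['href'])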

Python BeautifulSoup extracting titles according to id

This is a subquestion of this one: Python associate urls's ids and url's titles in lists
I have this HTML script:
<a href="http://pluzz.francetv.fr/videos/monte_le_son_live_,101973832.html"
class="ss-titre">Monte le son</a>
<div class="rs-cell-details">
<a href="http://pluzz.francetv.fr/videos/monte_le_son_live_,101973832.html"
class="ss-titre">"Rubin_Steiner"</a>
<a href="http://pluzz.francetv.fr/videos/fare_maohi_,102103928.html"
class="ss-titre">Fare maohi</a>
How can I get this result with BeautifulSoup:
list_titre = [['Monte le son', 'Rubin_Steiner'], ['Fare maohi']] #one sublist by id
I tried this:
import urllib
from bs4 import BeautifulSoup

f = urllib.urlopen(url)
page = f.read()
f.close()
soup = BeautifulSoup(page)
show = []
list_titre = []
list_url = []
for link in soup.findAll('a'):
    lien = link.get('href')
    if lien == None:
        lien = ""
    if "http://pluzz.francetv.fr/videos/" in lien:
        titre = (link.text.strip())
        if "Voir cette vidéo" in titre:
            titre = ""
        if "Lire la vidéo" in titre:
            titre = ""
        list_titre.append(titre)
        list_url.append(lien)
My result is:
list_titre = ['Monte le son', 'Rubin_Steiner', 'Fare maohi']
list_url = [http://pluzz.francetv.fr/videos/monte_le_son_live_,101973832.html, http://pluzz.francetv.fr/videos/fare_maohi_,102103928.html]
But "titre" is not sorted by id.
Search for your links with a CSS selector to limit hits to just qualifying URLs.
Collect the links in a dictionary by URL; that way you can then process the information by sorting the dictionary keys:
from bs4 import BeautifulSoup
links = {}
soup = BeautifulSoup(page)
for link in soup.select('a[href^=http://pluzz.francetv.fr/videos/]'):
    title = link.get_text().strip()
    if title and title not in (u'Voir cette vidéo', u'Lire la vidéo'):
        url = link['href']
        links.setdefault(url, []).append(title)
The dict.setdefault() call sets an empty list for urls not yet encountered; this produces a dictionary with the URLs as keys, and the titles as a list of values per URL.
Demo:
>>> page = '''\
... <a href="http://pluzz.francetv.fr/videos/monte_le_son_live_,101973832.html"
... class="ss-titre">Monte le son</a>
... <div class="rs-cell-details">
... <a href="http://pluzz.francetv.fr/videos/monte_le_son_live_,101973832.html"
... class="ss-titre">"Rubin_Steiner"</a>
... <a href="http://pluzz.francetv.fr/videos/fare_maohi_,102103928.html"
... class="ss-titre">Fare maohi</a>
... '''
>>> links = {}
>>> soup = BeautifulSoup(page)
>>> for link in soup.select('a[href^=http://pluzz.francetv.fr/videos/]'):
...     title = link.get_text().strip()
...     if title and title not in (u'Voir cette vidéo', u'Lire la vidéo'):
...         url = link['href']
...         links.setdefault(url, []).append(title)
...
>>> from pprint import pprint
>>> pprint(links)
{'http://pluzz.francetv.fr/videos/ce_soir_ou_jamais_,101506826.html': [u'Ce soir (ou jamais !)',
u'"Qui est propri\xe9taire de quoi ? La propri\xe9t\xe9 mise \xe0 mal dans tous les domaines"'],
'http://pluzz.francetv.fr/videos/clip_locaux_,102890631.html': [u'Clips'],
'http://pluzz.francetv.fr/videos/fare_maohi_,102152859.html': [u'Fare maohi'],
'http://pluzz.francetv.fr/videos/fare_maohi_,102292937.html': [u'Fare maohi'],
'http://pluzz.francetv.fr/videos/fare_maohi_,102365651.html': [u'Fare maohi'],
'http://pluzz.francetv.fr/videos/inspecteur_barnaby_,101972045.html': [u'Inspecteur Barnaby',
u'"La musique en h\xe9ritage"'],
'http://pluzz.francetv.fr/videos/le_lab_o_saison3_,101215383.html': [u'Le Lab.\xd4',
u'"Episode 22"',
u'Saison 3'],
'http://pluzz.francetv.fr/videos/monsieur_madame_saison1_,101970319.html': [u'Les Monsieur Madame',
u'"Musique"',
u'Saison 1'],
'http://pluzz.francetv.fr/videos/monte_le_son_live_,101973832.html': [u'Monte le son !',
u'"Rubin Steiner"'],
'http://pluzz.francetv.fr/videos/music_explorer_saison1_,101215382.html': [u'Music Explorer : les chasseurs de sons',
u'"Episode 3/6"',
u'Saison 1'],
'http://pluzz.francetv.fr/videos/retour_a_goree_,101641108.html': [u'Retour \xe0 Gor\xe9e'],
'http://pluzz.francetv.fr/videos/singe_mi_singe_moi_,101507102.html': [u'Singe mi singe moi',
u'"Le chat"'],
'http://pluzz.francetv.fr/videos/singe_mi_singe_moi_,101777072.html': [u'Singe mi singe moi',
u'"L\'autruche"'],
'http://pluzz.francetv.fr/videos/toute_nouvelle_tendance_,102472310.html': [u'T.N.T'],
'http://pluzz.francetv.fr/videos/toute_nouvelle_tendance_,102472336.html': [u'T.N.T'],
'http://pluzz.francetv.fr/videos/toute_nouvelle_tendance_,102721018.html': [u'T.N.T'],
'http://pluzz.francetv.fr/videos/toute_nouvelle_tendance_,103216774.html': [u'T.N.T.'],
'http://pluzz.francetv.fr/videos/toute_nouvelle_tendance_,103216788.html': [u'T.N.T'],
'http://pluzz.francetv.fr/videos/via_cultura_,101959892.html': [u'Via cultura',
u'"L\'Ochju, le Mauvais oeil"']}

Search inside results object - Python, BeautifulSoup

I'm trying to get some information from a site, put it in a list, and export that list to CSV.
This is a part of the site; it repeats several times.
<img src="image.jpg" alt="Aclimação">
</a>
</div>
Clique na imagem para ampliar
</div>
<div class="colInfos">
<h4>Aclimação</h4>
<div class="addressInfo">
Rua Muniz de Souza, 1110<br>
Aclimação - São Paulo - SP<br>
01534-001<br>
<br>
(11) 3208-3418 / 2639-0173<br>
aclimacao.sp#escolas.com.br<br>
I want to get the image link, the name (h4), the address (inside addressInfo; each <br> should be a separate item in a list) and the email of each school (a href mailto:) from this site and export them to a CSV file. This is how I'm trying to do it, but there is a problem: I don't know how to search inside the results object 'endereco'. How can I do this?
This is my code:
import urllib2
from BeautifulSoup import BeautifulSoup

url = urllib2.urlopen("http://www.fisk.com.br/unidades?pais=1&uf=&rg=&cid=&ba=&un=")
soup = BeautifulSoup(url)
#nomes = soup.findAll('h4')
dados = []
i = 1
for endereco in enderecos:
    text = ''.join(endereco.findAll(???))  # <- how can I search the br's inside this?
    dados[i] = text.encode('utf-8').strip()
    i = i +
enderecos = soup.findAll('div', attrs={'class': 'colInfos'})
It really works fine. All you have to do is replace
dados = []
i = 1
for endereco in enderecos:
    text = ''.join(endereco.findAll(text=True))
    dados[i] = text.encode('utf-8').strip()
    i = i +
enderecos = soup.findAll('div', attrs={'class': 'colInfos'})
with
dados = []
enderecos = soup.findAll('div', attrs={'class': 'colInfos'})
for endereco in enderecos:
    text = ''.join(endereco.findAll(text=True))
    dados.append(text.encode('utf-8').strip())
print dados
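If you also want each <br>-separated address line as its own list item, as the question asks, note that findAll(text=True) already returns the text nodes between the <br> tags one by one, so it is enough to strip and filter them. A sketch, in the same BeautifulSoup 3 / Python 2 style as above:
dados = []
for endereco in soup.findAll('div', attrs={'class': 'colInfos'}):
    # each text node between <br> tags arrives as a separate string
    linhas = [t.strip().encode('utf-8') for t in endereco.findAll(text=True) if t.strip()]
    dados.append(linhas)
print dados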
