I am trying to extract a url from a bs4.element.ResultSet. I have distilled the result set down to the following.
dirty_true_url = [<meta content="https://escapefromtarkov.gamepedia.com/AK-103_7.62x39_assault_rifle" property="og:url"/>]
Obviously, the URL is present in the meta tag's content attribute, and it is a gamepedia.com page.
In case it is required, this is the type of object I am handling:
type(dirty_true_url) #bs4.element.ResultSet
type(dirty_true_url[0]) #bs4.element.Tag
I have tried to obtain the url using variations of the following:
print(dirty_true_url('content'))
print(dirty_true_url[0]('content'))
print(dirty_true_url[0]('meta', {'content'}))
print(dirty_true_url[0]('meta', {'content' : ''}))
print(dirty_true_url[0]('meta', {'content' : 'https://'}))
print(dirty_true_url[0]('meta', {'content' : 'https://.*'}))
from urllib.parse import parse_qs, urlparse
parameters = parse_qs(urlparse(dirty_true_url).query)
print( parameters['https'][0] )
How can I extract the content/url from my soup?
If necessary, the following code can be used to obtain dirty_true_url:
#modules
import requests
from bs4 import BeautifulSoup
from io import StringIO
#body of code that gets the page
pseudo_temp_weapon_page_url = 'https://escapefromtarkov.gamepedia.com/index.php?title=AK-103&redirect=yes'
pseudo_temp_weapon_page = requests.get(pseudo_temp_weapon_page_url)
pseudo_temp_weapon_soup = BeautifulSoup(pseudo_temp_weapon_page.content, 'html.parser')
pseudo_temp_weapon_soup_as_string = StringIO(str(pseudo_temp_weapon_soup))
pseudo_temp_weapon_soup_parsed = BeautifulSoup(pseudo_temp_weapon_soup_as_string, 'html.parser')
#Snippet of html containing URL
dirty_true_url = pseudo_temp_weapon_soup_parsed.find_all(property="og:url")
You can try this:
print(dirty_true_url[0].get('content'))
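Note that .get('content') returns None if the attribute is missing, whereas indexing with ['content'] raises a KeyError, so .get() is the safer choice when the attribute might not be there.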
You can access a tag's attributes by treating the tag like a dictionary (see the Beautiful Soup docs):
dirty_true_url = pseudo_temp_weapon_soup_parsed.find_all(property="og:url")
print(dirty_true_url[0]['content'])
Output:
'https://escapefromtarkov.gamepedia.com/AK-103_7.62x39_assault_rifle'
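Since there is only one og:url meta tag on the page, you can also skip find_all and the [0] indexing entirely; a minimal sketch against the same page:
import requests
from bs4 import BeautifulSoup

url = 'https://escapefromtarkov.gamepedia.com/index.php?title=AK-103&redirect=yes'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# find() returns the first matching Tag (or None), so no indexing is needed
og_url_tag = soup.find('meta', property='og:url')
if og_url_tag is not None:
    print(og_url_tag['content'])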
I need to extract the URLs of all the 2020/21 matches from this [website][1] and scrape them.
I am sending a request to this link.
The section of the HTML that I want to retrieve is this part:
Here's the code that I am using:
from bs4 import BeautifulSoup
import requests
import pandas as pd
import urllib.parse
website = 'https://www.espncricinfo.com/series/ipl-2020-21-1210595/match-results'
response = requests.get(website)
soup = BeautifulSoup(response.content,'html.parser')
match_result = soup.find_all('a',{'class':'match-info-link-FIXTURES'});
soup.get('href')
url_part_1 = 'https://www.espncricinfo.com/'
url_part_2 = []
for item in match_result:
    url_part_2.append(item.get('href'))
url_joined = []
for link_2 in url_part_2:
    url_joined.append(urllib.parse.urljoin(url_part_1,link_2))
first_link = url_joined[0]
match_url = soup.find_all('div',{'class':'link-container border-bottom'});
soup.get('href')
url_part_3 = 'https://www.espncricinfo.com/'
url_part_4 = []
for item in match_result:
    url_part_4.append(item.get('href'))
print(url_part_4)
[1]: https://www.espncricinfo.com/series/ipl-2020-21-1210595/match-results
You don't need the second find_all block (match_url = soup.find_all('div', {'class': 'link-container border-bottom'}) and the loop after it), since match_result already contains the tags with the hrefs.
You can get the href with item.get('href').
You can do:
url_part_1 = 'https://www.espncricinfo.com/'
url_part_2 = []
for item in match_result:
    url_part_2.append(item.get('href'))
The result will look something like:
['/series/ipl-2020-21-1210595/delhi-capitals-vs-mumbai-indians-final-1237181/full-scorecard',
'/series/ipl-2020-21-1210595/delhi-capitals-vs-sunrisers-hyderabad-qualifier-2-1237180/full-scorecard',
'/series/ipl-2020-21-1210595/royal-challengers-bangalore-vs-sunrisers-hyderabad-eliminator-1237178/full-scorecard',
'/series/ipl-2020-21-1210595/delhi-capitals-vs-mumbai-indians-qualifier-1-1237177/full-scorecard',
'/series/ipl-2020-21-1210595/sunrisers-hyderabad-vs-mumbai-indians-56th-match-1216495/full-scorecard',
...
]
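From there you can join each relative path onto the base URL, as your own code already does; a minimal sketch using urllib.parse.urljoin:
from urllib.parse import urljoin

base = 'https://www.espncricinfo.com/'
# url_part_2 is the list of relative hrefs collected above
url_joined = [urljoin(base, link) for link in url_part_2]
print(url_joined[0])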
From the official docs:
It’s very useful to search for a tag that has a certain CSS class, but the name of the CSS attribute, “class”, is a reserved word in Python. Using class as a keyword argument will give you a syntax error. As of Beautiful Soup 4.1.2, you can search by CSS class using the keyword argument class_.
Try
soup.find_all("a", class_="match-info-link-FIXTURES")
from bs4 import BeautifulSoup as bs
import requests
import pandas as pd
url = "https://www.property24.com/for-sale/woodland-hills-wildlife-estate/bloemfontein/free-state/10467/109825373"
data = requests.get(url)
soup = bs(data.content,"html.parser")
The code below is a test to get one item:
property_overview = (
    soup.find(class_="p24_regularListing")
    .find(class_="p24_propertyOverview")
    .find(class_="p24_propertyOverviewRow")
    .find(class_="col-xs-6 p24_propertyOverviewKey")
    .text
)
property_overview
Output: 'Listing Number'
The code below is my attempt to get all of the col-xs-6 p24_propertyOverviewKey items:
p24_regularListing_items = soup.find_all(class_="p24_regularListing")
for p24_propertyOverview_item in p24_regularListing_items:
    p24_propertyOverview_items = p24_propertyOverview_item.find_all(class_="p24_propertyOverview")
    for p24_propertyOverviewRow_item in p24_propertyOverview_items:
        p24_propertyOverviewRow_items = p24_propertyOverviewRow_item.find_all(class_="p24_propertyOverviewRow")
        for p24_propertyOverviewKey_item in p24_propertyOverviewRow_items:
            p24_propertyOverviewKey_items = p24_propertyOverviewKey_item.find_all(class_="col-xs-6 p24_propertyOverviewKey")
p24_propertyOverviewKey_items
The code above only outputs one item, not all of them.
To put things more simply, you can use soup.select() (and via the comments, you can then use .get_text() to extract the text from each tag).
from bs4 import BeautifulSoup
import requests
resp = requests.get(
    "https://www.property24.com/for-sale/woodland-hills-wildlife-estate/bloemfontein/free-state/10467/109825373"
)
resp.raise_for_status()
soup = BeautifulSoup(resp.content, "html.parser")
texts = []
for tag in soup.select(
    # NB: this selector uses Python's implicit string concatenation
    # to split it onto several lines.
    ".p24_regularListing "
    ".p24_propertyOverview "
    ".p24_propertyOverviewRow "
    ".p24_propertyOverviewKey"
):
    texts.append(tag.get_text())
print(texts)
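If you also want the value that belongs to each key, a hedged sketch; note that the class name p24_propertyOverviewValue below is a guess, so inspect the page source for the real sibling class:
keys = [t.get_text(strip=True) for t in soup.select(".p24_propertyOverviewKey")]
# "p24_propertyOverviewValue" is hypothetical; confirm it in the actual HTML
values = [t.get_text(strip=True) for t in soup.select(".p24_propertyOverviewValue")]
print(dict(zip(keys, values)))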
I am trying to parse a txt file, linked below as an example.
The txt, however, is in the form of HTML. I am trying to get "COMPANY CONFORMED NAME", which is located at the top of the file; my function should return "Monocle Acquisition Corp".
https://www.sec.gov/Archives/edgar/data/1754170/0001571049-19-000004.txt
I have tried the following:
import requests
from bs4 import BeautifulSoup
url = 'https://www.sec.gov/Archives/edgar/data/1754170/0001571049-19-000004.txt'
r = requests.get(url)
soup = BeautifulSoup(r.content, "html")
However, "soup" does not contain "COMPANY CONFORMED NAME" at all.
Can someone point me in the right direction?
The data you are looking for is not in an HTML structure, so Beautiful Soup is not the best tool. The correct and fast way of searching for this data is just to use a simple regular expression, like this:
import re
import requests
url = 'https://www.sec.gov/Archives/edgar/data/1754170/0001571049-19-000004.txt'
r = requests.get(url)
text_string = r.content.decode()
name_re = re.compile(r"COMPANY CONFORMED NAME:\t*(.+)\n")
match = name_re.search(text_string).group(1)
print(match)
The part you are looking for is inside a huge <SEC-HEADER> tag.
You can get the whole section by using soup.find('sec-header'),
but you will need to parse the section manually. Something like this works, though it's a dirty job:
(view it on Replit: https://repl.it/#gui3/stackoverflow-parsing-html)
import requests
from bs4 import BeautifulSoup
url = 'https://www.sec.gov/Archives/edgar/data/1754170/0001571049-19-000004.txt'
r = requests.get(url)
soup = BeautifulSoup(r.content, "html")
header = soup.find('sec-header').text
company_name = None
for line in header.split('\n'):
    split = line.split(':')
    if len(split) > 1:
        key = split[0]
        value = split[1]
        if key.strip() == 'COMPANY CONFORMED NAME':
            company_name = value.strip()
            break
print(company_name)
There may be a library that can parse this data better than this code does.
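For instance, a minimal sketch (my own, not a library) that collects every top-level key: value pair in the header into a dictionary, so any field can be looked up, not just the company name:
import requests
from bs4 import BeautifulSoup

url = 'https://www.sec.gov/Archives/edgar/data/1754170/0001571049-19-000004.txt'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
header = soup.find('sec-header').text

# Keep only "KEY: value" lines that actually carry a value;
# bare "KEY:" lines are section markers in this format.
fields = {}
for line in header.split('\n'):
    key, sep, value = line.partition(':')
    if sep and value.strip():
        fields[key.strip()] = value.strip()

print(fields.get('COMPANY CONFORMED NAME'))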
I have scraped a website that provides me with Lisbon zip codes. With BeautifulSoup I was able to get the zip codes within a class item. However, the zip codes themselves are still mixed in with other tags, and except for string manipulation I couldn't find a way to extract all of them. I am new to web scraping and HTML, so sorry if this question is very basic.
This is my code:
from bs4 import BeautifulSoup as soup
from requests import get
url='https://worldpostalcode.com/portugal/lisboa/'
response = get(url)
print(response.text)
html_soup = soup(response.text,'lxml')
type(html_soup)
zip_codes=html_soup.find_all('div', {'class' : 'rightc'})
And this is a snippet of the result from which I would like to only extract the zip codes..
[<div class="rightc">1000-246<hr/> 1050-138<hr/> 1069-188<hr/> 1070-204<hr/> 1100-069<hr/> 1100-329<hr/> 1100-591<hr/> 1150-144<hr/> 1169-062<hr/> 1170-128<hr/> 1170-395<hr/> 1200-228<hr/> 1200-604<hr/> 1200-862<hr/> 1250-111<hr/> 1269-121<hr/> 1300-217<hr/> 1300-492<hr/> 1350-092<hr/> 1399-014<hr/> 1400-237<hr/> 1500-061<hr/> 1500-360<hr/> 1500-674<hr/> 1600-232<hr/> 1600-643<hr/> 1700-018<hr/> 1700-302<hr/> 1750-113<hr/> 1750-464<hr/> 1800-262<hr/> 1900-115<hr/> 1900-401<hr/> 1950-208<hr/> 1990-162<hr/> 1000-247<hr/> 1050-139<hr/> 1069-190<hr/> 1070-205<hr/> 1100-070<hr/> 1100-330</div>]
Your result zip_codes has the type bs4.element.ResultSet, which is a set of bs4.element.Tag, so zip_codes[0] is what you're interested in (the first tag found). Use its .text attribute to strip out the <hr> tags. Now you have one long string of zip codes separated by spaces; split it into a list (two options below; option one is more pythonic and faster).
from bs4 import BeautifulSoup as soup
from requests import get
url = 'https://worldpostalcode.com/portugal/lisboa/'
response = get(url)
html_soup = soup(response.text,'lxml')
zip_codes = html_soup.find_all('div', {'class' : 'rightc'})
# option one
zips = zip_codes[0].text.split(' ')
print(zips[:8])
# option two (slower)
zips = []
for zc in zip_codes[0].childGenerator():
    zips.append(zc.extract().strip())
print(zips[:8])
Output:
['1000-246', '1050-138', '1069-188', '1070-204', '1100-069', '1100-329', '1100-591', '1150-144']
['1000-246', '1050-138', '1069-188', '1070-204', '1100-069', '1100-329', '1100-591', '1150-144']
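A third option, for what it's worth: get_text() accepts a separator argument, so you can let Beautiful Soup insert the delimiter and then split on any whitespace:
zips = zip_codes[0].get_text(separator=' ').split()
print(zips[:8])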
You can get the text and split it:
html_soup = BeautifulSoup(htmlcontent, 'lxml')
zip_codes = html_soup.find_all('div', {'class' : 'rightc'})
print(zip_codes[0].text.split(' '))
Output:
[u'1000-246', u'1050-138', u'1069-188', u'1070-204', .........]
Use regex to grab the codes
from bs4 import BeautifulSoup
import requests
import re
url = 'https://worldpostalcode.com/portugal/lisboa/'
res = requests.get(url)
soup = BeautifulSoup(res.content, "lxml")
element = soup.select_one('.codelist .rightc')
codes = re.findall(r"\d{4}-\d{3}",element.text)
for code in codes:
    print(code)
I would suggest replacing all the <hr> tags with some delimiter (e.g. #, $ or ,) before loading the page response into soup. Once it is loaded, the job is easy: you can extract the zip codes as a list just by selecting the class.
from bs4 import BeautifulSoup as soup
from requests import get

url = 'https://worldpostalcode.com/portugal/lisboa/'
response = get(url)
# Swap the <hr> separators for '#' *before* parsing, then split on it
html_soup = soup(response.text.replace('<hr>', '#'), 'lxml')
zip_codes = html_soup.find_all('div', {'class' : 'rightc'})
zip_codes = zip_codes[0].text.split('#')
print(zip_codes)
Hope this helps! Cheers!
P.S.: Answer is open for improvements and comments.
I was building a web-scraper using python.
The purpose of my scraper is to fetch all the links to websites from this webpage http://www.ebizmba.com/articles/torrent-websites
I want output like -
www.thepiratebay.se
www.kat.ph
I am new to python and scraping, and I was doing this just for practice. Please help me to get the right output.
My code:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.ebizmba.com/articles/torrent-websites")
soup = BeautifulSoup(r.content, "html.parser")
data = soup.find_all("div", {"class:", "main-container-2"})
for item in data:
    print(item.contents[1].find_all("a"))
My Output --- http://i.stack.imgur.com/Xi37B.png
If you are web scraping for practice, have a look at regular expressions.
The code here would get just the headline links. The Needle1 string is the match pattern, and the parentheses in (http:.*?) delimit the match group.
import urllib2
import re
myURL = "http://www.ebizmba.com/articles/torrent-websites"
req = urllib2.Request(myURL)
Needle1 = '<p><a href="(http:.*?)" rel="nofollow" target="_blank">'
for match in re.finditer(Needle1, urllib2.urlopen(req).read()):
    print(match.group(1))
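For reference, urllib2 is Python 2 only; a minimal Python 3 equivalent of the same regex approach (urllib2 was merged into urllib.request):
import re
import urllib.request

my_url = "http://www.ebizmba.com/articles/torrent-websites"
needle = r'<p><a href="(http:.*?)" rel="nofollow" target="_blank">'

# read() returns bytes in Python 3, so decode before applying the regex
html = urllib.request.urlopen(my_url).read().decode("utf-8", errors="replace")
for match in re.finditer(needle, html):
    print(match.group(1))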
Use .get('href') like this:
import requests
from bs4 import BeautifulSoup
r = requests.get("http://www.ebizmba.com/articles/torrent-websites")
soup = BeautifulSoup(r.text, "html.parser")
data = soup.find_all("div", {"class": "main-container-2"})
for i in data:
    for j in i.contents[1].find_all("a"):
        print(j.get('href'))
Full output:
http://www.thepiratebay.se
http://siteanalytics.compete.com/thepiratebay.se
http://quantcast.com/thepiratebay.se
http://www.alexa.com/siteinfo/thepiratebay.se/
http://www.kickass.to
http://siteanalytics.compete.com/kickass.to
http://quantcast.com/kickass.to
http://www.alexa.com/siteinfo/kickass.to/
http://www.torrentz.eu
http://siteanalytics.compete.com/torrentz.eu
http://quantcast.com/torrentz.eu
http://www.alexa.com/siteinfo/torrentz.eu/
http://www.extratorrent.cc
http://siteanalytics.compete.com/extratorrent.cc
http://quantcast.com/extratorrent.cc
http://www.alexa.com/siteinfo/extratorrent.cc/
http://www.yify-torrents.com
http://siteanalytics.compete.com/yify-torrents.com
http://quantcast.com/yify-torrents.com
http://www.alexa.com/siteinfo/yify-torrents.com
http://www.bitsnoop.com
http://siteanalytics.compete.com/bitsnoop.com
http://quantcast.com/bitsnoop.com
http://www.alexa.com/siteinfo/bitsnoop.com/
http://www.isohunt.to
http://siteanalytics.compete.com/isohunt.to
http://quantcast.com/isohunt.to
http://www.alexa.com/siteinfo/isohunt.to/
http://www.sumotorrent.sx
http://siteanalytics.compete.com/sumotorrent.sx
http://quantcast.com/sumotorrent.sx
http://www.alexa.com/siteinfo/sumotorrent.sx/
http://www.torrentdownloads.me
http://siteanalytics.compete.com/torrentdownloads.me
http://quantcast.com/torrentdownloads.me
http://www.alexa.com/siteinfo/torrentdownloads.me/
http://www.eztv.it
http://siteanalytics.compete.com/eztv.it
http://quantcast.com/eztv.it
http://www.alexa.com/siteinfo/eztv.it/
http://www.rarbg.com
http://siteanalytics.compete.com/rarbg.com
http://quantcast.com/rarbg.com
http://www.alexa.com/siteinfo/rarbg.com/
http://www.1337x.org
http://siteanalytics.compete.com/1337x.org
http://quantcast.com/1337x.org
http://www.alexa.com/siteinfo/1337x.org/
http://www.torrenthound.com
http://siteanalytics.compete.com/torrenthound.com
http://quantcast.com/torrenthound.com
http://www.alexa.com/siteinfo/torrenthound.com/
https://demonoid.org/
http://siteanalytics.compete.com/demonoid.pw
http://quantcast.com/demonoid.pw
http://www.alexa.com/siteinfo/demonoid.pw/
http://www.fenopy.se
http://siteanalytics.compete.com/fenopy.se
http://quantcast.com/fenopy.se
http://www.alexa.com/siteinfo/fenopy.se/