I am learning how to use BeautifulSoup. I managed to parse the HTML, and now I want to extract a list of links from the page. The problem is that I am only interested in some of the links, and the only way I can think of to isolate them is to take all the links after a certain word appears. Can I drop part of the soup before I start extracting? Thank you.
This is what I have:
# import libraries
import urllib2
from bs4 import BeautifulSoup
import pandas as pd
import os
import re
# specify the url
quote_page = 'https://econpapers.repec.org/RAS/pab7.htm'
# query the website and return the html to the variable page
page = urllib2.urlopen(quote_page)
# parse the html using beautiful soup and store in variable soup
soup = BeautifulSoup(page, 'html.parser')
print(soup)
#transform to pandas dataframe
pages1 = soup.find_all('li')
print(pages1)
pages2 = pd.DataFrame({
    "papers": pages1,
})
print(pages2)
And I need to drop the upper half of the links in pages2. The only way to differentiate the ones I want from the rest is a word that appears in the HTML, namely this line: "<h2 class="colored">Journal Articles</h2>"
EDIT: I just noticed that I can also separate them by the beginning of the link. I only want the ones that start with "/article/"
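To address the original question directly: you don't have to drop part of the soup. You can find the heading and take only the links that come after it. A minimal sketch, assuming the soup object from the question's code:
# find the heading that separates the two sections
marker = soup.find('h2', class_='colored', string='Journal Articles')
if marker is not None:
    # collect only the links that appear after the heading in the document
    journal_links = [a['href'] for a in marker.find_all_next('a', href=True)]
    print(journal_links)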
Alternatively, using a CSS selector:
# parse the html using beautiful soup and store in variable soup
soup = BeautifulSoup(page, 'lxml')
#print(BeautifulSoup.prettify(soup))
css_selector = 'a[href^="/article"]'
href_tag_list = soup.select(css_selector)
print("Href list size:", len(href_tag_list)) # check that you found datas, do if else if needed
href_link_list = [] #use urljoin probably needed at some point
for href_tag in href_tag_list:
href_link_list.append(href_tag['href'])
print("href:", href_tag['href'])
I used this reference web page, which was provided by another Stack Overflow user:
Web Link
NB: You will have to take the "/article/" entry off the list.
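For the urljoin step mentioned in the comment above, a small sketch, assuming Python 3 and that quote_page holds the page URL from the question:
from urllib.parse import urljoin

# turn the relative hrefs into absolute URLs against the page URL
absolute_links = [urljoin(quote_page, href) for href in href_link_list]
print(absolute_links)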
There can be various ways to get all the hrefs starting with "/article/". One of the simple ways to do this would be:
# import libraries
import urllib.request
from bs4 import BeautifulSoup
import os
import re
import ssl
# specify the url
quote_page = 'https://econpapers.repec.org/RAS/pab7.htm'
gcontext = ssl.SSLContext()  # unverified SSL context, in case of certificate errors
# query the website and return the html to the variable page
page = urllib.request.urlopen(quote_page, context=gcontext)
# parse the html using beautiful soup and store in variable soup
soup = BeautifulSoup(page, 'html.parser')
#print(soup)
# anchor tags whose href starts with "/article/"
anchor_tags = soup.find_all('a', href=re.compile("^/article/"))
for link in anchor_tags:
    print(link.get('href'))
This answer would be helpful as well. And go through the quick start guide of BeautifulSoup; it has very good, elaborate examples.
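If you'd rather not use a regular expression, BeautifulSoup also accepts a plain function as the attribute filter. A sketch, reusing the soup from above:
# equivalent filter without a regex: pass a callable to href=
anchor_tags = soup.find_all('a', href=lambda h: h and h.startswith('/article/'))
for link in anchor_tags:
    print(link.get('href'))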
Related
I am trying to parse this page "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1", but I can't find the href that I need (href="/title/tt0068112/episodes?ref_=tt_eps_sm").
I tried with this code:
url="https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1"
page(requests.get(url)
soup=BeautifulSoup(page.content,"html.parser")
for a in soup.find_all('a'):
print(a['href'])
What's wrong with this? I also tried to check "manually" with print(soup.prettify()) but it seems that that link is hidden or something like that.
You can get the page HTML with requests; the href item is in there, no need for special APIs. I tried this and it worked:
import requests
from bs4 import BeautifulSoup
page = requests.get("https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1")
soup = BeautifulSoup(page.content, "html.parser")
scooby_link = ""
for item in soup.find_all("a", href="/title/tt0068112/episodes?ref_=tt_eps_sm"):
    print(item["href"])
    scooby_link = "https://www.imdb.com" + item["href"]
print(scooby_link)
I'm assuming you also wanted to save the link to a variable for further scraping so I did that as well. 🙂
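As a possible next step (an assumption about where you're headed), the saved link can be fetched and parsed the same way:
episodes_page = requests.get(scooby_link)
episodes_soup = BeautifulSoup(episodes_page.content, "html.parser")
print(episodes_soup.title)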
To get the link with Episodes, you can use the following example:
import requests
from bs4 import BeautifulSoup
url = "https://www.imdb.com/title/tt0068112/?ref_=fn_al_tt_1"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
print(soup.select_one("a:-soup-contains(Episodes)")["href"])
Prints:
/title/tt0068112/episodes?ref_=tt_eps_sm
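Since that href is relative, you can make it absolute with urljoin:
from urllib.parse import urljoin

print(urljoin(url, soup.select_one("a:-soup-contains(Episodes)")["href"]))
# -> https://www.imdb.com/title/tt0068112/episodes?ref_=tt_eps_sm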
from bs4 import BeautifulSoup
import requests

url = 'https://www.apple.com/kr/search/youtube?src=globalnav'
response = requests.get(url)
html = response.text
soup = BeautifulSoup(html, 'html.parser')
links = soup.select(".rf-serp-productname-list")
print(links)
I want to crawl through the links of all the apps shown. When I searched for a keyword, I thought links = soup.select(".rf-serp-productname-list") would work, but the links list is empty.
What should I do?
Just check this code; I think it is what you want:
import re
import requests
from bs4 import BeautifulSoup
pages = set()
def get_links(page_url):
    global pages
    pattern = re.compile("^(/)")
    html = requests.get(f"your_URL{page_url}").text  # f-strings require Python 3.6+
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a", href=pattern):
        if "href" in link.attrs:
            if link.attrs["href"] not in pages:
                new_page = link.attrs["href"]
                print(new_page)
                pages.add(new_page)
                get_links(new_page)

get_links("")
Source:
https://gist.github.com/AO8/f721b6736c8a4805e99e377e72d3edbf
You can change this part:
for link in soup.find_all("a", href=pattern):
    # do something
to check for a keyword, I think.
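For instance, a sketch of that keyword check (the keyword itself is a placeholder assumption):
keyword = "youtube"  # hypothetical keyword
for link in soup.find_all("a", href=pattern):
    href = link.attrs.get("href", "")
    if keyword in href:
        print(href)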
You are cooking a soup, so first of all taste it and check that everything you expect is actually in it.
The ResultSet of your selection is empty because the structure of the response differs a bit from the one you expected based on the developer tools.
To get the list of links, select more specifically:
links = [a.get('href') for a in soup.select('a.icon')]
Output:
['https://apps.apple.com/kr/app/youtube/id544007664', 'https://apps.apple.com/kr/app/%EC%BF%A0%ED%8C%A1%ED%94%8C%EB%A0%88%EC%9D%B4/id1536885649', 'https://apps.apple.com/kr/app/youtube-music/id1017492454', 'https://apps.apple.com/kr/app/instagram/id389801252', 'https://apps.apple.com/kr/app/youtube-kids/id936971630', 'https://apps.apple.com/kr/app/youtube-studio/id888530356', 'https://apps.apple.com/kr/app/google-chrome/id535886823', 'https://apps.apple.com/kr/app/tiktok-%ED%8B%B1%ED%86%A1/id1235601864', 'https://apps.apple.com/kr/app/google/id284815942']
Hey, I just wanted to test Python web scraping and I have no idea why this doesn't work.
The output I get is [] and nothing else.
Does anyone have an idea? Because if I go to the website and search for the element, I find it.
from bs4 import BeautifulSoup
import requests
html_text = requests.get("https://osu.ppy.sh/users/20488254").text
soup = BeautifulSoup(html_text, "lxml")
job = soup.find("div", class_="profile-detail__col profile-detail__col--bottom-right")
print(job)
Player info is loaded dynamically with JS, so you can't scrape dynamic content using plain bs4. Luckily, they provide user info in JSON format inside a script tag. If you open the page source and look for json-user, you will see there is a tag:
<script id="json-user" type="application/json">
{"avatar_url":"https:\/\/a.ppy.sh\/20488254?1622470835.jpeg","country_code":"AT","default_group":"default","id":20488254,...
</script>
You can grab the JSON inside that tag and get any information about the player. Here is how it would look:
import json
import requests
from bs4 import BeautifulSoup
html_text = requests.get("https://osu.ppy.sh/users/20488254").text
soup = BeautifulSoup(html_text, "lxml")
json_data = json.loads(soup.find('script', {'id':'json-user'}).string)
Now let's say you are looking for the player's global rank. All you need to do is find the correct keys to navigate there:
player_rank = json_data['statistics']['global_rank']
# -> 199303
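Other top-level fields visible in the snippet above can be read the same way:
print(json_data['id'])            # -> 20488254
print(json_data['country_code'])  # -> 'AT'
print(json_data['avatar_url'])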
I just started programming.
I have the task of extracting data from an HTML page to Excel.
Using Python 3.7.
My problem is that I have a website with more URLs inside.
Behind these URLs, again more URLs.
I need the data behind the third URL.
My first problem would be: how can I tell the program to choose only specific links from a ul, rather than every ul on the page?
from bs4 import BeautifulSoup
import urllib
import requests
import re
page = urllib.request.urlopen("file").read()
soup = BeautifulSoup(page, "html.parser")
print(soup.prettify())
for link in soup.find_all("a", href=re.compile("katalog_")):
    links = link.get("href")
    if "katalog" in links:
        for link in soup.find_all("a", href=re.compile("alle_")):
            links = link.get("href")
            print(soup.get_text())
There are many ways; one is to use find_all and be specific about tags like "a", just like you did. If that's the only option, then use a regular expression on your output. You can refer to this thread: Python BeautifulSoup Extract specific URLs. Also, please show us either the link or the HTML structure of the links you want to extract; we would like to see the differences between the URLs.
PS: Sorry I can't make comments because of <50 reputation or I would have.
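For illustration, a sketch of that narrowing with a regex (the pattern is taken from the question's code):
import re

# collect only the hrefs that contain "katalog_"
katalog_links = [a.get("href") for a in soup.find_all("a", href=re.compile("katalog_"))]
print(katalog_links)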
Updated answer based on understanding:
from bs4 import BeautifulSoup
import urllib
import requests
page = urllib.request.urlopen("https://www.bsi.bund.de/DE/Themen/ITGrundschutz/ITGrundschutzKompendium/itgrundschutzKompendium_node.html").read()
soup = BeautifulSoup(page, "html.parser")
for firstlink in soup.find_all("a", {"class": "RichTextIntLink NavNode"}):
    firstlinks = firstlink.get("href")
    if "bausteine" in firstlinks:
        bausteinelinks = "https://www.bsi.bund.de/" + str(firstlinks.split(';')[0])
        response = urllib.request.urlopen(bausteinelinks).read()
        soup = BeautifulSoup(response, 'html.parser')
        secondlink = "https://www.bsi.bund.de/" + str(((soup.find("a", {"class": "RichTextIntLink Basepage"})["href"]).split(';'))[0])
        res = urllib.request.urlopen(secondlink).read()
        soup = BeautifulSoup(res, 'html.parser')
        listoftext = soup.find_all("div", {"id": "content"})
        for text in listoftext:
            print(text.text)
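Since the original task was to get the data into Excel, a minimal sketch of that last step (collecting the text into a list first is my assumption; pandas and openpyxl need to be installed):
import pandas as pd

collected = []  # fill this inside the loop above, e.g. collected.append(text.text)
pd.DataFrame({"content": collected}).to_excel("output.xlsx", index=False)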
I want to scrape information from this page.
Specifically, I want to scrape the table which appears when you click "View all" under the "TOP 10 HOLDINGS" (you have to scroll down on the page a bit).
I am new to web scraping, and have tried using BeautifulSoup to do this. However, there seems to be an issue because of the "onclick" function I need to take into account. In other words: the HTML code I scrape directly from the page doesn't include the table I want to obtain.
I am a bit confused about my next step: should I use something like selenium or can I deal with the issue in an easier/more efficient way?
Thanks.
My current code:
from bs4 import BeautifulSoup
import requests
Soup = BeautifulSoup
my_url = 'http://www.etf.com/SHE'
page = requests.get(my_url)
htmltxt = page.text
soup = Soup(htmltxt, "html.parser")
print(soup)
You can get a JSON response from the API: http://www.etf.com/view_all/holdings/SHE. The table you're looking for is located under 'view_all'.
import requests
from bs4 import BeautifulSoup as Soup
url = 'http://www.etf.com/SHE'
api = "http://www.etf.com/view_all/holdings/SHE"
headers = {'X-Requested-With':'XMLHttpRequest', 'Referer':url}
page = requests.get(api, headers=headers)
htmltxt = page.json()['view_all']
soup = Soup(htmltxt, "html.parser")
data = [[td.text for td in tr.find_all('td')] for tr in soup.find_all('tr')]
print('\n'.join(': '.join(row) for row in data))
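If the goal is further analysis, the same rows can go straight into a DataFrame. A sketch (header handling is an assumption; adjust it to the actual table):
import pandas as pd

df = pd.DataFrame([row for row in data if row])
print(df.head())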