Python scraping links from buttons with event

Link I want to scrape: https://digital.auraria.edu/work/ns/8fb66c05-0ad2-4e56-8cc7-6ced34d0c126
I'm currently having some trouble scraping the "Download" button on this website to download the PDF file using Python and Beautiful Soup. Normally there's a link
and I can just do:
soup = BeautifulSoup(r.content, 'lxml')
links = soup.find_all("a")
for link in links:
    href = link.get('href')
    if href and 'pdf' in href:  # check whether the book's PDF link is in there
        i += 1
        response = requests.get(href)
        print(f"Retrieving PDF for: {title}")
        write_pdf(pdf_path, response.content)
However, I'm not quite sure what the link for the PDF is here. I'm wondering if I have to use a headless browser, and how I would be able to extract this link.
Here is an image of the inspect element view of the link below.

The way I found the PDF link was by going to the page and looking at the page source. Then I used the find tool, searched for "PDF", and found a meta tag.
<meta name="citation_pdf_url" content="https://dashboard.digital.auraria.edu/downloads/1e0b44c6-cd79-49a3-9eac-0b10d1a4392e"/>
I followed the link and it downloaded a PDF with the same title. In the code below, you can get the entire tag, or just its contents by appending .attrs.get('content').
Required -> pip install bs4 requests
from bs4 import BeautifulSoup
import requests
url = "https://digital.auraria.edu/work/ns/8fb66c05-0ad2-4e56-8cc7-6ced34d0c126"
req = requests.get(url)
soup = BeautifulSoup(req.content, 'html.parser')
pdf_link = soup.find("meta", attrs={'name': "citation_pdf_url"}).attrs.get('content')
print(pdf_link)
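If you also want to save the file, here is a minimal follow-up sketch; the output filename book.pdf is just a placeholder:
# follow the citation_pdf_url link and save the response body to disk
pdf_response = requests.get(pdf_link)
with open("book.pdf", "wb") as f:  # "book.pdf" is a placeholder filename
    f.write(pdf_response.content)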
Good luck and let me know if you face any other issues!

Just scrape the filename and append it to this link. I got the link by actually downloading the file, copying its download address, removing the filename, and adding a different one to test it; it works like a charm. Note the Expires and Signature parameters: the link is signed and time-limited, so it will eventually stop working. https://storage.googleapis.com/us-fedora/fcrepo.binary.directory/ea/05/2a/ea052a597af18fb6a46c44254e9e596a7e93571f?GoogleAccessId=k8s-storage%40repositories-307112.iam.gserviceaccount.com&Expires=1643411246&Signature=ICo51cFbe3By7JPJol8nLfxcic%2BV%2Bv1uvGYjodATCXJc2I6XWSi7JWC8l%2BM6BTSVFOL8A0YioZOQggY8Afc0JJtiwInkxFHmVjleQ41he3RK5pwF4IwONeuQxcgUXYzd8p94sA5L0YZC6drAFb9mx4AJLwTdKQt7dZh146FmaQYY8ElGT6BpHX2t%2BK31UGP0pC75uFGUq6b3IDK11gPOCSvnrLGSAM1yulE8togDgZmw0BU77nLPkinXSIATCTjlHNxf5aUxlJkg0%2FtSM21b53JFvHGHHCQf8QSKtST4WCBA1up6BVX1YLbGLZXxQ07mf8K7jnQ4U%2FXfnw6IoTpQxw%3D%3D&response-content-disposition=attachment%3B%20filename%3D
So according to your example, the link would look like this: https://storage.googleapis.com/us-fedora/fcrepo.binary.directory/ea/05/2a/ea052a597af18fb6a46c44254e9e596a7e93571f?GoogleAccessId=k8s-storage%40repositories-307112.iam.gserviceaccount.com&Expires=1643411246&Signature=ICo51cFbe3By7JPJol8nLfxcic%2BV%2Bv1uvGYjodATCXJc2I6XWSi7JWC8l%2BM6BTSVFOL8A0YioZOQggY8Afc0JJtiwInkxFHmVjleQ41he3RK5pwF4IwONeuQxcgUXYzd8p94sA5L0YZC6drAFb9mx4AJLwTdKQt7dZh146FmaQYY8ElGT6BpHX2t%2BK31UGP0pC75uFGUq6b3IDK11gPOCSvnrLGSAM1yulE8togDgZmw0BU77nLPkinXSIATCTjlHNxf5aUxlJkg0%2FtSM21b53JFvHGHHCQf8QSKtST4WCBA1up6BVX1YLbGLZXxQ07mf8K7jnQ4U%2FXfnw6IoTpQxw%3D%3D&response-content-disposition=attachment%3B%20filename%3DIR00000195_00001.pdf
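A quick sketch of that idea; base_url here stands in for the signed storage URL above, shortened with "..." for readability:
# the signed URL ends with "filename%3D", so appending the scraped filename
# yields the direct download link
base_url = "https://storage.googleapis.com/us-fedora/...filename%3D"  # placeholder; use the full signed URL above
filename = "IR00000195_00001.pdf"  # the scraped filename
download_url = base_url + filename
print(download_url)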


How to copy the text from certain Website with python?

I want to copy the text from this website (https://www.reclamgymnasium.de/mobil/plankl.html?Klasse=9.2) to use it later in a Python script.
How can I do this? (It doesn't really work with requests...)
If you google Python web scraping, you will find a lot of information!
Basically, you start by executing
response = requests.get(url)
which provides you with the HTML content of the webpage. Now you can use Beautiful Soup to navigate through the content to get what you need.
First we need to create a soup:
soup = BeautifulSoup(response.text, "lxml")
in which we can now find the content. If, for example, we want to find all the URLs in the webpage, we can use:
soup.find_all('a')
Here is a complete example for printing all the URLs of a webpage:
import requests
from bs4 import BeautifulSoup
url = "https://google.com"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
for link in soup.find_all('a'):
    print(link)
Here is the documentation of Beautiful Soup: https://www.crummy.com/software/BeautifulSoup/bs4/doc/
As the information Johann was looking for isn't static but loaded dynamically, I'm writing a second answer to explain how I got to the data.
When visiting the webpage https://www.reclamgymnasium.de/mobil/plankl.html?Klasse=9.2,
open the development tools of your browser (in my case Firefox, which I open by pressing F12).
When the development tools are open, click on the "Network" tab, which will be empty at this point.
Reload the page by clicking the reload arrow or by pressing F5.
Now we can see requests being loaded in the "Network" tab.
As we are looking for data loaded after the page content, look for "xml" or "json" responses in the "Type" column.
Right-click a response of the correct type and click "Open page in new tab".
If multiple responses match, test all of them until you find the information you are looking for.
In this case we found https://www.reclamgymnasium.de/mobil/mobdaten/PlanKl20210618.xml?_=1623933794858
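To fetch and parse that XML from Python, here is a minimal sketch, assuming the endpoint is still reachable (the trailing ?_=... timestamp on the original URL is just a cache-buster and can be dropped):
import requests
import xml.etree.ElementTree as ET

# URL found in the browser's network tab
url = "https://www.reclamgymnasium.de/mobil/mobdaten/PlanKl20210618.xml"
response = requests.get(url)

# parse the XML payload and list the top-level tags to explore its structure
root = ET.fromstring(response.content)
for child in root:
    print(child.tag)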

Webscraping using BeautifulSoup (Jupyter Notebook)

Good afternoon,
I am fairly new to web scraping. I am trying to scrape a dataset from an open data portal, just to figure out how to scrape a website.
I am trying to scrape a dataset from data.toerismevlaanderen.be.
This is the dataset I want: https://data.toerismevlaanderen.be/tourist/reca/beer_bars
I always end up with an HTTP error: HTTP Error 404: Not Found
This is my code:
import requests
import urllib.request
import time
from bs4 import BeautifulSoup
url = 'https://data.toerismevlaanderen.be/'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')
soup.findAll('a')
one_a_tag = soup.findAll('a')[35]
link = one_a_tag['href']
download_url = 'https://data.toerismevlaanderen.be/'+ link
urllib.request.urlretrieve(download_url,'./'+link[link.find('/tourist/reca/beer_bars_')+1:])
time.sleep
What am I doing wrong?
The issue is the following:
link = one_a_tag['href']
print(link)
This returns a link: https://data.toerismevlaanderen.be/
Then you are adding this link to download_url by doing:
download_url = 'https://data.toerismevlaanderen.be/'+ link
Therefore, if you print(download_url), you get:
https://data.toerismevlaanderen.be/https://data.toerismevlaanderen.be/
which is not a valid URL.
UPDATE BASED ON COMMENTS
The issue is that there is no tourist/activities/breweries anywhere in the text you scrape.
If you write:
for link in soup.findAll('a'):
    print(link.get('href'))
you see all the a href values; none contains tourist/activities/breweries.
But if you just want the link data.toerismevlaanderen.be/tourist/activities/breweries, you can do:
download_url = link + "tourist/activities/breweries"
There is an API for this, so I would use that,
e.g.
import requests
r = requests.get('https://opendata.visitflanders.org/tourist/reca/beer_bars.json?page=1&page_size=500&limit=1').json()
You get many absolute links in return, so adding them to the original URL for new requests therefore won't work. Simply requesting the link you grabbed will work instead.
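A minimal sketch of using that endpoint; since the exact structure of the payload isn't shown here, it just saves the raw JSON for inspection:
import json
import requests

api_url = 'https://opendata.visitflanders.org/tourist/reca/beer_bars.json?page=1&page_size=500&limit=1'
records = requests.get(api_url).json()

# save the raw JSON locally so the structure can be explored offline
with open('beer_bars.json', 'w', encoding='utf-8') as f:
    json.dump(records, f, indent=2)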

Download all the files in the website

I need to download all the files under these links, where only the suburb name changes in each link.
Just for reference:
https://www.data.vic.gov.au/data/dataset/2014-town-and-community-profile-for-thornbury-suburb
All the files under this search link:
https://www.data.vic.gov.au/data/dataset?q=2014+town+and+community+profile
Any possibilities?
Thanks :)
You can download a file like this:
import urllib.request

response = urllib.request.urlopen('http://www.example.com/file_to_download')
html = response.read()
To get all the links in a page
from bs4 import BeautifulSoup
import requests

r = requests.get("http://site-to.crawl")
data = r.text
soup = BeautifulSoup(data, 'html.parser')

for link in soup.find_all('a'):
    print(link.get('href'))
You should first read the HTML, parse it using Beautiful Soup, and then find links according to the file type you want to download. For instance, if you want to download all PDF files, you can check whether the links end with the .pdf extension.
There's a good explanation and code available here:
https://medium.com/@dementorwriter/notesdownloader-use-web-scraping-to-download-all-pdfs-with-python-511ea9f55e48
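A minimal sketch of that approach, assuming Python 3 and that the search page links to the files directly (the .pdf filter is just an example; swap in whatever extension you need):
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

base_url = 'https://www.data.vic.gov.au/data/dataset?q=2014+town+and+community+profile'
soup = BeautifulSoup(requests.get(base_url).text, 'html.parser')

for a in soup.find_all('a', href=True):
    href = a['href']
    if href.lower().endswith('.pdf'):
        pdf_url = urljoin(base_url, href)  # resolve relative links against the page URL
        filename = os.path.basename(pdf_url)
        with open(filename, 'wb') as f:
            f.write(requests.get(pdf_url).content)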

Scraping with Python. Can't get wanted data

I am trying to scrape a website, but I encountered a problem: the HTML seems to differ between what I see in the browser's inspector and what I get from Python. I get this with http://edition.cnn.com/election/results/states/arizona/house/01 where I tried to scrape election results. I used the script below to check the HTML of the webpage, and I noticed the two are different. There are none of the classes I need, like section-wrapper.
page =requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, "lxml")
print(soup)
Does anyone know what the problem is?
http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json
This site uses JavaScript to fetch its data; you can check the URL above.
You can find this URL in Chrome DevTools; there are many requests, check them out:
Chrome >> F12 >> Network tab >> F5 (refresh page) >> double-click the .json URL >> open in new tab
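A minimal sketch of fetching that endpoint directly; the structure of the payload is an assumption, so inspect it before relying on specific keys:
import requests

# JSON endpoint found in the browser's network tab
url = 'http://data.cnn.com/ELECTION/2016/AZ/county/H_d1_county.json'
data = requests.get(url).json()

# explore the payload before relying on any specific keys
print(type(data))
print(list(data)[:5] if isinstance(data, dict) else data[:1])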
import requests
from bs4 import BeautifulSoup

page = requests.get('http://edition.cnn.com/election/results/states/arizona/house/01')
soup = BeautifulSoup(page.content, 'html.parser')

# you can try all sorts of tags here; I used class "ad" and class "ec-placeholder"
g_data = soup.find_all("div", {"class": "ec-placeholder"})
h_data = soup.find_all("div", {"class": "ad"})

for item in g_data:
    print(item)
# for item in h_data:
#     print(item)

How to find specific video html tag using beautiful soup?

Does anyone know how to use Beautiful Soup in Python?
I have this search engine with a list of different URLs.
I want to get only the HTML tag containing a video embed URL, and get the link.
Example:
import BeautifulSoup
html = '''https://archive.org/details/20070519_detroit2'''
#or this.. html = '''http://www.kumby.com/avatar-the-last-airbender-book-3-chapter-5/'''
#or this... html = '''https://www.youtube.com/watch?v=fI3zBtE_S_k'''
soup = BeautifulSoup.BeautifulSoup(html)
What should I do next to get the HTML tag of the video, or the object, or the exact link of the video?
I need it to put in my iframe. I will integrate the Python with my PHP, so I'll get the link of the video with Python and then echo it into my iframe.
You need to get the HTML of the page, not just the URL.
Use the standard-library urllib like this:
from urllib.request import urlopen
from bs4 import BeautifulSoup as BS

url = 'https://archive.org/details/20070519_detroit2'

# open and read the page
page = urlopen(url)
html = page.read()

# create a BeautifulSoup parse-able "soup"
soup = BS(html, 'html.parser')

# get the src attribute from the video tag
video = soup.find("video").get("src")
Also, with the site you are using, I noticed that to get the embed link you just change "details" in the link to "embed", so it looks like this:
https://archive.org/embed/20070519_detroit2
So if you want to do it for multiple URLs without having to parse, just do something like this:
url = 'https://archive.org/details/20070519_detroit2'
spl = url.split('/')
spl[3] = 'embed'  # replace the "details" path segment
embed = "/".join(spl)
print(embed)
EDIT
To get the embed link for the other links you provided in your edit, you need to look through the HTML of the page you are parsing. Look until you find the link, then get the tag it's in, then the attribute.
For
'''http://www.kumby.com/avatar-the-last-airbender-book-3-chapter-5/'''
just do
soup.find("iframe").get("src")
the iframe because the link is in the iframe tag, and .get("src") because the link is its src attribute.
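Put together, a minimal sketch for that page, assuming it still serves the iframe:
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen('http://www.kumby.com/avatar-the-last-airbender-book-3-chapter-5/').read()
soup = BeautifulSoup(html, 'html.parser')
print(soup.find("iframe").get("src"))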
You can try the remaining one yourself, because you should learn how to do this if you want to be able to do it in the future :)
Good luck!
You can't parse a URL; BeautifulSoup is used to parse an HTML page. Retrieve the page first:
import urllib.request

data = urllib.request.urlopen("https://archive.org/details/20070519_detroit2")
html = data.read()
Then you can use find, and then take the src attribute:
soup = BeautifulSoup(html, 'html.parser')
video = soup.find('video')
src = video['src']
Here is a short snippet to get all the downloadable MP4 files on that page, in case you need it.
import bs4
from urllib.request import urlopen

url = 'https://archive.org/details/20070519_detroit2'
soup = bs4.BeautifulSoup(urlopen(url), 'html.parser')
# anchors without an href would raise a KeyError inside the lambda, so check first
links = [a['href'] for a in soup.find_all(
    lambda tag: tag.name == "a" and tag.has_attr('href') and '.mp4' in tag['href'])]
print(links)
Here is the output:
['/download/20070519_detroit2/20070519_detroit_jungleearth.mp4',
'/download/20070519_detroit2/20070519_detroit_sweetkissofdeath.mp4',
'/download/20070519_detroit2/20070519_detroit_goodman.mp4',
...
'/download/20070519_detroit2/20070519_detroit_wilson_512kb.mp4']
These are relative links; put them together with the domain and you get the absolute path.
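A small sketch of that join using the standard library; the sample link is taken from the output above:
from urllib.parse import urljoin

base = 'https://archive.org'
relative = '/download/20070519_detroit2/20070519_detroit_jungleearth.mp4'

# urljoin resolves the relative path against the domain
print(urljoin(base, relative))
# https://archive.org/download/20070519_detroit2/20070519_detroit_jungleearth.mp4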
