Downloading target link HTML to a text file (Beautiful Soup, Python 3)

I am completely new to Python and am studying web crawling.
I am trying to download the page behind each target link into a text file.
So far I have succeeded in extracting all the target URLs I need, but I have no idea how to download the HTML of each target page to a text file.
Can someone give me a general idea?
url = ""
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data, "lxml")
link1 = soup2.find_all('a', href=re.compile("drupal_lists"))
for t in link1:
print(t.attrs['href'])

Within your for loop, fetch each link URL with the requests library and write the contents to a file. Something like:
link_data = requests.get(t.attrs['href']).text
with open('file_to_write.out', 'w') as f:
    f.write(link_data)
You will probably want to use a different filename for each link.
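For example, a minimal sketch of that idea (assuming the extracted hrefs are absolute URLs; if they are relative you would need urljoin, and the link_N.html naming scheme is just an illustration):
import re

import requests
from bs4 import BeautifulSoup

url = ""  # the page containing the target links
soup = BeautifulSoup(requests.get(url).text, "lxml")

for i, t in enumerate(soup.find_all('a', href=re.compile("drupal_lists"))):
    link_url = t.attrs['href']                 # assumed to be an absolute URL
    link_data = requests.get(link_url).text    # download the linked page's HTML
    with open('link_{}.html'.format(i), 'w') as f:  # one file per link
        f.write(link_data)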

Related

Python scraping links from buttons with event

Link I want to scrape: https://digital.auraria.edu/work/ns/8fb66c05-0ad2-4e56-8cc7-6ced34d0c126
I'm currently having trouble scraping the "Download" button on this website to download the PDF file using Python and Beautiful Soup, since normally there's a direct link
and I can just do:
soup = BeautifulSoup(r.content, 'lxml')
links = soup.find_all("a")
for link in links:
    if 'pdf' in link.get('href'):  # find if the book pdf link is in there
        i += 1
        response = requests.get(link.get('href'))
        print(f"Retrieving PDF for: {title}")
        write_pdf(pdf_path, response.content)
However, I'm not quite sure what the link for the PDF is on this page. Do I have to use a headless browser, and how would I be able to extract this link?
The way I found the PDF link is by going to the page and looking at the page source. Then I used the finder tool and searched for PDF and found a meta tag.
<meta name="citation_pdf_url" content="https://dashboard.digital.auraria.edu/downloads/1e0b44c6-cd79-49a3-9eac-0b10d1a4392e"/>
I followed the link and it downloaded a PDF with the same title. In the code below, you can get the entire tag, or just its contents by appending .attrs.get('content').
Required -> pip install bs4 requests
from bs4 import BeautifulSoup
import requests
url = "https://digital.auraria.edu/work/ns/8fb66c05-0ad2-4e56-8cc7-6ced34d0c126"
req = requests.get(url)
soup = BeautifulSoup(req.content, 'html.parser')
pdf_link = soup.find("meta", attrs={'name': "citation_pdf_url"}).attrs.get('content')
print(pdf_link)
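If you also want to save the file, a minimal follow-up sketch (assuming the extracted pdf_link can be fetched directly; the output.pdf filename is just an example):
pdf_response = requests.get(pdf_link)
with open("output.pdf", "wb") as f:  # "output.pdf" is an arbitrary example name
    f.write(pdf_response.content)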
Good luck and let me know if you face any other issues!
Just scrape the filename and append it to this link. I got the link by actually downloading the file, copying its download address, removing the file name and adding a different one to test it; it works like a charm. https://storage.googleapis.com/us-fedora/fcrepo.binary.directory/ea/05/2a/ea052a597af18fb6a46c44254e9e596a7e93571f?GoogleAccessId=k8s-storage%40repositories-307112.iam.gserviceaccount.com&Expires=1643411246&Signature=ICo51cFbe3By7JPJol8nLfxcic%2BV%2Bv1uvGYjodATCXJc2I6XWSi7JWC8l%2BM6BTSVFOL8A0YioZOQggY8Afc0JJtiwInkxFHmVjleQ41he3RK5pwF4IwONeuQxcgUXYzd8p94sA5L0YZC6drAFb9mx4AJLwTdKQt7dZh146FmaQYY8ElGT6BpHX2t%2BK31UGP0pC75uFGUq6b3IDK11gPOCSvnrLGSAM1yulE8togDgZmw0BU77nLPkinXSIATCTjlHNxf5aUxlJkg0%2FtSM21b53JFvHGHHCQf8QSKtST4WCBA1up6BVX1YLbGLZXxQ07mf8K7jnQ4U%2FXfnw6IoTpQxw%3D%3D&response-content-disposition=attachment%3B%20filename%3D
So, following your example, the link would look like this: https://storage.googleapis.com/us-fedora/fcrepo.binary.directory/ea/05/2a/ea052a597af18fb6a46c44254e9e596a7e93571f?GoogleAccessId=k8s-storage%40repositories-307112.iam.gserviceaccount.com&Expires=1643411246&Signature=ICo51cFbe3By7JPJol8nLfxcic%2BV%2Bv1uvGYjodATCXJc2I6XWSi7JWC8l%2BM6BTSVFOL8A0YioZOQggY8Afc0JJtiwInkxFHmVjleQ41he3RK5pwF4IwONeuQxcgUXYzd8p94sA5L0YZC6drAFb9mx4AJLwTdKQt7dZh146FmaQYY8ElGT6BpHX2t%2BK31UGP0pC75uFGUq6b3IDK11gPOCSvnrLGSAM1yulE8togDgZmw0BU77nLPkinXSIATCTjlHNxf5aUxlJkg0%2FtSM21b53JFvHGHHCQf8QSKtST4WCBA1up6BVX1YLbGLZXxQ07mf8K7jnQ4U%2FXfnw6IoTpQxw%3D%3D&response-content-disposition=attachment%3B%20filename%3DIR00000195_00001.pdf

Download all the files in the website

I need to download all the files under these links, where only the suburb name changes in each link.
Just for reference:
https://www.data.vic.gov.au/data/dataset/2014-town-and-community-profile-for-thornbury-suburb
All the files under this search link:
https://www.data.vic.gov.au/data/dataset?q=2014+town+and+community+profile
Is this possible?
Thanks :)
You can download a file like this:
import urllib2  # Python 2; on Python 3 use urllib.request.urlopen instead

response = urllib2.urlopen('http://www.example.com/file_to_download')
html = response.read()
To get all the links on a page:
from bs4 import BeautifulSoup
import requests

r = requests.get("http://site-to.crawl")
data = r.text
soup = BeautifulSoup(data, "html.parser")

for link in soup.find_all('a'):
    print(link.get('href'))
You should first read the HTML, parse it using Beautiful Soup, and then filter the links according to the file type you want to download. For instance, if you want to download all PDF files, you can check whether the links end with the .pdf extension.
There's a good explanation and code available here:
https://medium.com/@dementorwriter/notesdownloader-use-web-scraping-to-download-all-pdfs-with-python-511ea9f55e48
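As a rough sketch of that approach (assuming you want every link ending in .pdf from the search page; the extension check and filename handling are illustrative and untested against this particular site):
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

base_url = "https://www.data.vic.gov.au/data/dataset?q=2014+town+and+community+profile"
soup = BeautifulSoup(requests.get(base_url).text, "html.parser")

for a in soup.find_all("a", href=True):
    href = a["href"]
    if href.lower().endswith(".pdf"):          # adjust the extension for other file types
        file_url = urljoin(base_url, href)     # handle relative hrefs
        filename = os.path.basename(file_url)
        with open(filename, "wb") as f:
            f.write(requests.get(file_url).content)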

How can I reach seq tag data via web scraping with BeautifulSoup?

I am a newbie to web scraping. I am trying to get a FASTA file from here, but somehow I cannot. The problem starts for me at the span tag; I tried a couple of suggestions, but they are not working for me, and I suspect there may be some access restriction.
The FASTA data is in this class, but when I run this code I can only see the FASTA title:
url = "https://www.ncbi.nlm.nih.gov/nuccore/193211599?report=fasta"
res = requests.get(url)
soup = BeautifulSoup(res.text, "html.parser")
fasta_data = soup.find_all("div")
for link in soup.find_all("div", {"class": "seqrprt seqviewer"}):
print link.text
url = "https://www.ncbi.nlm.nih.gov/nuccore/193211599?report=fasta"
res = requests.get(url)
soup = BeautifulSoup(res.text, "html.parser")
fasta_data = soup.find_all("div")
for link in soup.find_all("div", {"class": "seqrprt seqviewer"}):
print link.text
# When I try to reach it directly via the span tags, the output is empty.
div = soup.find("div", {'id': 'viewercontent1'})
spans = div.find_all('span')
for span in spans:
    print(span.string)
Every scraping job involves two phases:
1. Understand the page that you want to scrape. (How does it work? Is content loaded via Ajax? Are there redirections, POST or GET requests, iframes, anti-scraping measures?)
2. Emulate the page's requests using your favourite framework.
Do not write a single line of code before working on point 1. Your browser's network inspector is your friend, use it!
Regarding your webpage, it seems that the report is loaded into a viewer that gets its data from this URL:
https://www.ncbi.nlm.nih.gov/sviewer/viewer.fcgi?id=193211599&db=nuccore&report=fasta&extrafeat=0&fmt_mask=0&retmode=html&withmarkup=on&tool=portal&log$=seqview&maxdownloadsize=1000000
Request that URL and you will get your report.
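A minimal sketch of that approach (assuming the sviewer endpoint still serves the report; with retmode=html the response carries HTML markup, which is stripped below):
import requests
from bs4 import BeautifulSoup

viewer_url = (
    "https://www.ncbi.nlm.nih.gov/sviewer/viewer.fcgi"
    "?id=193211599&db=nuccore&report=fasta&extrafeat=0&fmt_mask=0"
    "&retmode=html&withmarkup=on&tool=portal&log$=seqview&maxdownloadsize=1000000"
)
res = requests.get(viewer_url)

# The body is FASTA wrapped in HTML; strip the tags to get plain text.
fasta_text = BeautifulSoup(res.text, "html.parser").get_text()
print(fasta_text)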

Downloading pdfs with python?

I'm writing a script that uses regex to find PDF links on a page and then download those links. The script runs and names the files properly in my personal directory; however, it is not downloading the full PDF file. The PDFs that are pulled are only 19 KB, a corrupted PDF, when they should be approximately 15 MB.
import urllib, urllib2, re

url = 'http://www.website.com/Products'
destination = 'C:/Users/working/'

website = urllib2.urlopen(url)
html = website.read()
links = re.findall('.PDF">.*_geo.PDF', html)

for item in links:
    DL = item[6:]
    DL_PATH = url + '/' + DL
    SV_PATH = destination + DL
    urllib.urlretrieve(DL_PATH, SV_PATH)
The url variable points to a page with links to all the PDFs. When you click on a PDF link it takes you to 'www.website.com/Products/NorthCarolina.pdf', which displays the PDF in the browser. I'm not sure whether, because of this, I should be using a different Python method or module.
You could try something like this:
import requests

links = ['link.pdf']

for link in links:
    book_name = link.split('/')[-1]
    with open(book_name, 'wb') as book:
        a = requests.get(link, stream=True)
        for block in a.iter_content(512):
            if not block:
                break
            book.write(block)
You can also use your HTML knowledge (for parsing) and the BeautifulSoup library to find all the PDF links on a webpage and then download them all together.
from urllib.request import urlopen
from bs4 import BeautifulSoup as bs

html = urlopen(my_url).read()
html_page = bs(html, features="lxml")
After parsing, you can search for <a> tags, since all hyperlinks have these tags. Once you have all the <a> tags, you can narrow them down by checking whether they end with the .pdf extension. Here's a full explanation: https://medium.com/the-innovation/notesdownloader-use-web-scraping-to-download-all-pdfs-with-python-511ea9f55e48
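Putting the two snippets together, a rough sketch (assuming my_url points at the page containing the PDF links and that the hrefs may be relative):
from urllib.parse import urljoin
from urllib.request import urlopen

import requests
from bs4 import BeautifulSoup as bs

my_url = 'http://www.website.com/Products'  # placeholder URL from the question
html_page = bs(urlopen(my_url).read(), features="lxml")

for a in html_page.find_all('a', href=True):
    if a['href'].lower().endswith('.pdf'):
        pdf_url = urljoin(my_url, a['href'])   # resolve relative links
        book_name = pdf_url.split('/')[-1]
        with open(book_name, 'wb') as book:
            r = requests.get(pdf_url, stream=True)
            for block in r.iter_content(512):  # stream the file in small chunks
                book.write(block)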

live link checking with Python can't get it to run from file

I have written a script that checks whether a link to a given domain, in this case 'twitter.com', is live on a website.
I appreciate that the way I have done this is probably not the best, but I am pretty new to Python and programming in general.
Anyway, I am trying to run this from a file of links, so that the raw input of a single URL is done away with and I can check multiple URLs from a file to see if their pages contain 'twitter.com'.
Here is my code, working but using raw_input():
from bs4 import BeautifulSoup
import requests

link_list = []
status = ' Live!!'
domain = 'twitter.com'

url = raw_input("Enter a website to extract the URL's from: ")
r = requests.get('http://www.' + url)
data = r.text
soup = BeautifulSoup(data)
for link in soup.find_all('a'):
    links = link.get('href')
    link_list.append(links)
if domain in ', '.join(link_list):
    print url + status
Just to clarify: I have a file of URLs, one per line, and I'd like to check whether the pages they point to contain a link to 'twitter.com'.
I have tried many ways but it just won't work!
Any help is much appreciated.
If you want to open a file and read the lines into a list, it's easy:
with open(filename) as f:
    urls = f.readlines()
After that, urls will be a list of the lines; note that each entry still ends with a newline, so strip it before using it.
Then you can iterate over this list:
for url in urls:
    url = url.strip()  # remove the trailing newline left by readlines()
    link_list = []
    r = requests.get('http://www.' + url)
    data = r.text
    soup = BeautifulSoup(data)
    for link in soup.find_all('a'):
        links = link.get('href')
        link_list.append(links)
    if domain in ', '.join(link_list):
        print url + status
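Putting it all together, a self-contained sketch under the same assumptions as the original script (Python 3 syntax, so print() instead of print; 'links.txt' is a hypothetical input file with one site per line, e.g. example.com):
from bs4 import BeautifulSoup
import requests

status = ' Live!!'
domain = 'twitter.com'

with open('links.txt') as f:  # hypothetical input file
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    r = requests.get('http://www.' + url)
    soup = BeautifulSoup(r.text, 'html.parser')
    link_list = [link.get('href') for link in soup.find_all('a', href=True)]
    if domain in ', '.join(link_list):
        print(url + status)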
