I am trying to download some images from the NHTSA Crash Viewer (CIREN cases). An example case: https://crashviewer.nhtsa.dot.gov/nass-CIREN/CaseForm.aspx?xsl=main.xsl&CaseID=99817
If I try to download a Front crash image, no file is downloaded. I am using the beautifulsoup4 and requests libraries, and this code works for other websites.
The image links are in the following format: https://crashviewer.nhtsa.dot.gov/nass-CIREN/GetBinary.aspx?Image&ImageID=555004572&CaseID=555003071&Version=0
I have also tried previous answers from SO, but none of the solutions work. Error obtained:
No response from server
Code used for web scraping:
from bs4 import *
import requests as rq
import os

r2 = rq.get("https://crashviewer.nhtsa.dot.gov/nass-CIREN/GetBinary.aspx?Image&ImageID=555004572&CaseID=555003071&Version=0")
soup2 = BeautifulSoup(r2.text, "html.parser")

links = []
x = soup2.select('img[src^="https://crashviewer.nhtsa.dot.gov"]')
for img in x:
    links.append(img['src'])

os.mkdir('ciren_photos')

i = 1
for index, img_link in enumerate(links):
    if i <= 200:
        img_data = rq.get(img_link).content
        with open("ciren_photos\\" + str(index + 1) + '.jpg', 'wb+') as f:
            f.write(img_data)
        i += 1
    else:
        f.close()
        break
This is a task that would require Selenium, but luckily there is a shortcut. On the top of the page there is a "Text and Images Only" link that goes to a page like this one: https://crashviewer.nhtsa.dot.gov/nass-CIREN/CaseForm.aspx?ViewText&CaseID=99817&xsl=textonly.xsl&websrc=true that contains all the images and text content in one page. You can select that link with soup.find('a', text='Text and Images Only').
That link and the image links are relative (links to the same site are usually relative links), so you'll have to use urljoin() to get the full URLs.
from bs4 import BeautifulSoup
import requests as rq
from urllib.parse import urljoin

url = 'https://crashviewer.nhtsa.dot.gov/nass-CIREN/CaseForm.aspx?xsl=main.xsl&CaseID=99817'

with rq.session() as s:
    r = s.get(url)
    soup = BeautifulSoup(r.text, "html.parser")

    url = urljoin(url, soup.find('a', text='Text and Images Only')['href'])
    r = s.get(url)
    soup = BeautifulSoup(r.text, "html.parser")

    links = [urljoin(url, i['src']) for i in soup.select('img[src^="GetBinary.aspx"]')]
    for link in links:
        content = s.get(link).content
        # write `content` to file
So, the site doesn't return valid pictures unless the request carries valid cookies. There are two ways to get the cookies: either reuse cookies from a previous request or use a Session object. It's best to use a Session because it also handles the TCP connection and other parameters.
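For the last step, one way to fill in the "write `content` to file" placeholder, continuing inside the with rq.session() as s: block above, is to name each file after its ImageID query parameter. The parameter name comes from the links shown in the question; the folder name and the naming scheme itself are just one possible choice:

import os
from urllib.parse import urlparse, parse_qs

os.makedirs('ciren_photos', exist_ok=True)
for link in links:
    content = s.get(link).content
    # take the ImageID value from the query string and use it as the file name
    image_id = parse_qs(urlparse(link).query).get('ImageID', ['unknown'])[0]
    with open(os.path.join('ciren_photos', image_id + '.jpg'), 'wb') as f:
        f.write(content)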
Related
I'm trying to scrape images from a site using the beautifulsoup HTML parser.
There are 2 kinds of image tags for each image on the site. One is for the thumbnail and the other is for the bigger image that only appears after I click on the thumbnail to expand it. The bigger image's tag contains a class="expanded-image" attribute.
I'm trying to parse through the HTML and get the "src" attribute of the expanded image, which contains the source for the image.
When I try to execute my code, nothing happens. It just says the process finished without scraping any images. But when I don't filter and just pass the tag name as an argument, it downloads all the thumbnails.
Here's my code:
import webbrowser, requests, os
from bs4 import BeautifulSoup

def getdata(url):
    r = requests.get(url)
    return r.text

htmldata = getdata('https://boards.4chan.org/a/thread/30814')
soup = BeautifulSoup(htmldata, 'html.parser')
list = []

for i in soup.find_all("img", {"class": "expanded-thumb"}):
    list.append(i['src'].replace("//", "https://"))

def download(url, pathname):
    if not os.path.isdir(pathname):
        os.makedirs(pathname)
    filename = os.path.join(pathname, url.split("/")[-1])
    response = requests.get(url, stream=True)
    with open(filename, "wb") as f:
        f.write(response.content)

for a in list:
    download(a, "file")
You might be running into a problem using "list" as a variable name; it's a built-in type in Python. Start with this (replacing TEST_4CHAN_URL with whatever thread you want), incorporating my suggestion from the comment above.
import requests
from bs4 import BeautifulSoup

TEST_4CHAN_URL = "https://boards.4chan.org/a/thread/<INSERT_THREAD_ID_HERE>"

def getdata(url):
    r = requests.get(url)
    return r.text

htmldata = getdata(TEST_4CHAN_URL)
soup = BeautifulSoup(htmldata, "html.parser")

src_list = []
for i in soup.find_all("a", {"class": "fileThumb"}):
    src_list.append(i['href'].replace("//", "https://"))

print(src_list)
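To actually save the files rather than just print the list, the download() helper from the question can be reused on src_list; a sketch, keeping the same "file" folder name the question used:

import os

def download(url, pathname):
    # create the target folder if it doesn't exist yet
    if not os.path.isdir(pathname):
        os.makedirs(pathname)
    # use the last part of the URL as the file name
    filename = os.path.join(pathname, url.split("/")[-1])
    response = requests.get(url, stream=True)
    with open(filename, "wb") as f:
        f.write(response.content)

for src in src_list:
    download(src, "file")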
I am trying to find downloadable video links on a website. For example, I am working with URLs like this one: https://www.loc.gov/item/2015669100/. You can see that there is an m3u8 video link under the mejs__mediaelement div tag.
However, my code is not printing anything, meaning it's not finding the video URLs for the website.
My code is below:
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen

with open('pages2crawl.txt', 'r') as inFile:
    lines = [line.rstrip() for line in inFile]

for page in lines:
    req = Request(page, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(urlopen(req).read(), 'html.parser')
    pages = soup.findAll('div', attrs={'class': 'mejs__mediaelement'})
    for e in pages:
        video = e.find("video").get("src")
        if video.endswith("m3u8"):
            print(video)
If you just want to make a simple script, it would probably be easier to use regex.
import re, requests

url = "https://www.loc.gov/item/2015669100/"  # page from the question

s = requests.Session()  # start the session
data = s.get(url)  # http get request to download the data
data = data.text  # get the raw text
vidlinks = re.findall(r"src='(.*?)\.m3u8'/>", data)  # find everything between the two delimiters
print(vidlinks[0] + ".m3u8")  # print the full link with extension
You can use the CSS selector source[type="application/x-mpegURL"] to extract the MPEG link (or source[type="video/mp4"] to extract the mp4 link):
import requests
from bs4 import BeautifulSoup
url = "https://www.loc.gov/item/2015669100/"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
link_mpeg = soup.select_one('source[type="application/x-mpegURL"]')["src"]
link_mp4 = soup.select_one('source[type="video/mp4"]')["src"]
print(link_mpeg)
print(link_mp4)
Prints:
https://tile.loc.gov/streaming-services/iiif/service:afc:afc2010039:afc2010039_crhp0001:afc2010039_crhp0001_mv04/full/full/0/full/default.m3u8
https://tile.loc.gov/storage-services/service/afc/afc2010039/afc2010039_crhp0001/afc2010039_crhp0001_mv04.mp4
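If the goal is to save the video rather than just print the link, the direct .mp4 URL found above can be streamed to disk; a minimal sketch, continuing from the snippet above and deriving the file name from the URL:

filename = link_mp4.split("/")[-1]  # e.g. afc2010039_crhp0001_mv04.mp4
with requests.get(link_mp4, stream=True) as r:
    r.raise_for_status()
    with open(filename, "wb") as f:
        # write the video in chunks so it isn't held in memory all at once
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)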
I am trying to create a bot that scrapes all the image links from a site and stores them somewhere else so I can download the images afterwards.
from selenium import webdriver
import time
from bs4 import BeautifulSoup as bs
import requests

url = 'https://www.artstation.com/artwork?sorting=trending'
page = requests.get(url)

driver = webdriver.Chrome()
driver.get(url)
time.sleep(3)
soup = bs(driver.page_source, 'html.parser')

gallery = soup.find_all(class_="image-src")
data = gallery[0]

for x in range(len(gallery)):
    print("TAG:", sep="\n")
    print(gallery[x], sep="\n")

if page.status_code == 200:
    print("Request OK")
This returns all the link tags I wanted, but I can't find a way to strip the HTML or copy only the links to a new list. Here is an example of the tag I get:
<div class="image-src" image-src="https://cdnb.artstation.com/p/assets/images/images/012/269/255/20180810092820/smaller_square/vince-rizzi-batman-n52-p1-a.jpg?1533911301" ng-if="::!project.hide_as_adult"></div>
So, how do I get only the links within the gallery[] list?
What I want to do afterwards is take these links and change the /smaller_square/ directory to /large/, which is the one that has the high-resolution image.
The page loads its data through AJAX, so through the network inspector we can see where the call is made. This snippet will obtain all the image links found on page 1, sorted by trending:
import requests
import json

url = 'https://www.artstation.com/projects.json?page=1&sorting=trending'
page = requests.get(url)
json_data = json.loads(page.text)

for data in json_data['data']:
    print(data['cover']['medium_image_url'])
Prints:
https://cdna.artstation.com/p/assets/images/images/012/272/796/medium/ben-zhang-brigitte-hero-concept.jpg?1533921480
https://cdna.artstation.com/p/assets/covers/images/012/279/572/medium/ham-sung-choul-braveking-140823-1-3-s3-mini.jpg?1533959982
https://cdnb.artstation.com/p/assets/covers/images/012/275/963/medium/michael-vicente-orb-gem-thumb.jpg?1533933774
https://cdnb.artstation.com/p/assets/images/images/012/275/635/medium/michael-kutsche-piglet-by-michael-kutsche.jpg?1533932387
https://cdna.artstation.com/p/assets/images/images/012/273/384/medium/ben-zhang-unnamed.jpg?1533923353
https://cdnb.artstation.com/p/assets/covers/images/012/273/083/medium/michael-vicente-orb-guardian-thumb.jpg?1533922229
... and so on.
If you print the variable json_data, you will see the other information the page sends (like the icon image URL, total_count, data about the author, etc.).
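Since the endpoint takes a page parameter, the same request can be repeated to collect links from several pages; a small sketch of that loop (the number of pages and the one-second pause are arbitrary choices here):

import time
import requests

all_links = []
for page_number in range(1, 4):  # pages 1-3; adjust as needed
    url = 'https://www.artstation.com/projects.json?page={}&sorting=trending'.format(page_number)
    json_data = requests.get(url).json()
    for data in json_data['data']:
        all_links.append(data['cover']['medium_image_url'])
    time.sleep(1)  # small pause between requests

print(len(all_links), 'links collected')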
You can access a tag's attributes with dictionary-style key lookups.
Example:
from bs4 import BeautifulSoup
s = '''<div class="image-src" image-src="https://cdnb.artstation.com/p/assets/images/images/012/269/255/20180810092820/smaller_square/vince-rizzi-batman-n52-p1-a.jpg?1533911301" ng-if="::!project.hide_as_adult"></div>'''
soup = BeautifulSoup(s, "html.parser")
print(soup.find("div", class_="image-src")["image-src"])
#or
print(soup.find("div", class_="image-src").attrs['image-src'])
Output:
https://cdnb.artstation.com/p/assets/images/images/012/269/255/20180810092820/smaller_square/vince-rizzi-batman-n52-p1-a.jpg?1533911301
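Applied to the gallery list from the question, and using the /smaller_square/ to /large/ swap the question mentions (whether that larger variant exists at the same path is an assumption taken from the question, not verified here):

# plain URL strings instead of the full <div> tags
links = [tag["image-src"] for tag in gallery]
# point at the high-resolution versions, per the question's path swap
large_links = [u.replace("/smaller_square/", "/large/") for u in links]
print(large_links[0])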
I'm trying to scrape all the file paths from links like this: https://github.com/themichaelusa/Trinitum/find/master, without using the GitHub API at all.
The link above contains a data-url attribute in the HTML (table, id='tree-finder-results', class='tree-browser css-truncate'), which is used to build a URL like this: https://github.com/themichaelusa/Trinitum/tree-list/45a2ca7145369bee6c31a54c30fca8d3f0aae6cd
which displays this dictionary:
{"paths":["Examples/advanced_example.py","Examples/basic_example.py","LICENSE","README.md","Trinitum/AsyncManager.py","Trinitum/Constants.py","Trinitum/DatabaseManager.py","Trinitum/Diagnostics.py","Trinitum/Order.py","Trinitum/Pipeline.py","Trinitum/Position.py","Trinitum/RSU.py","Trinitum/Strategy.py","Trinitum/TradingInstance.py","Trinitum/Trinitum.py","Trinitum/Utilities.py","Trinitum/__init__.py","setup.cfg","setup.py"]}
when you view it in a browser like Chrome. However, a GET request yields a <Response [400]>.
Here is the code I used:
import requests
from bs4 import BeautifulSoup

username, repo = 'themichaelusa', 'Trinitum'
ghURL = 'https://github.com'
url = ghURL + '/{}/{}/find/master'.format(username, repo)

html = requests.get(url)
soup = BeautifulSoup(html.text, "lxml")
repoContent = soup.find('div', class_='tree-finder clearfix')
fileLinksURL = ghURL + str(repoContent.find('table').attrs['data-url'])
filePaths = requests.get(fileLinksURL)
print(filePaths)
Not sure what is wrong with it. My theory is that the first link creates a cookie that allows the second link to show the file paths of the repo we are targeting. I'm just unsure how to achieve this via code. Would really appreciate some pointers!
Give it a go. The links containing .py files are generated dynamically, so to catch them you need to use selenium. I think this is what you expected.
from selenium import webdriver
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'https://github.com/themichaelusa/Trinitum/find/master'

driver = webdriver.Chrome()
driver.get(url)
soup = BeautifulSoup(driver.page_source, "lxml")
driver.quit()

for link in soup.select('#tree-finder-results .js-tree-finder-path'):
    print(urljoin(url, link['href']))
Partial results:
https://github.com/themichaelusa/Trinitum/blob/master
https://github.com/themichaelusa/Trinitum/blob/master/Examples/advanced_example.py
https://github.com/themichaelusa/Trinitum/blob/master/Examples/basic_example.py
https://github.com/themichaelusa/Trinitum/blob/master/LICENSE
https://github.com/themichaelusa/Trinitum/blob/master/README.md
https://github.com/themichaelusa/Trinitum/blob/master/Trinitum/AsyncManager.py
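Alternatively, if the cookie theory from the question is right, it may be possible to skip Selenium and make both requests from a single requests.Session, so any cookies set by the find/master page are sent along with the tree-list request. This is an untested sketch of that idea; the extra header is only a guess that the endpoint expects an AJAX-style request:

import requests
from bs4 import BeautifulSoup

gh_url = 'https://github.com'
find_url = gh_url + '/themichaelusa/Trinitum/find/master'

with requests.Session() as s:
    html = s.get(find_url)
    soup = BeautifulSoup(html.text, 'lxml')
    repo_content = soup.find('div', class_='tree-finder clearfix')
    file_links_url = gh_url + repo_content.find('table').attrs['data-url']

    # same session cookies, plus an AJAX-style header (a guess, not verified)
    file_paths = s.get(file_links_url, headers={'X-Requested-With': 'XMLHttpRequest'})
    print(file_paths.status_code)
    print(file_paths.text)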
I am using Python 3.5 with BeautifulSoup (bs4) and urllib. The code I will append returns all the links for ONE page.
How do I loop this so it runs across all pages on the website, using the links found on each page to decide which pages to scrape next? I don't know how many hops deep I need to go.
I have tried looping it, of course, but it never stops, since pages contain links to pages I have already scanned. I have tried keeping a set of the links I have already scanned and checking 'if not in set', but again it just runs forever.
import bs4
import re
import urllib.request

website = 'http://elderscrolls.wikia.com/wiki/Skyrim'

req = urllib.request.Request(website)
with urllib.request.urlopen(req) as response:
    the_page = response.read()  # store web page html

dSite = bs4.BeautifulSoup(the_page, "html.parser")

links = []
for link in dSite.find_all('a'):  # grab all links on page
    links.append(link.get('href'))

siteOnly = re.split('/', website)

validLinks = set()
for item in links:
    if re.search('^/' + siteOnly[3] + '/', str(item)):  # filter links to local website
        newLink = 'http://' + str(siteOnly[2]) + str(item)
        validLinks.add(newLink)

print(validLinks)
import bs4, requests
from urllib.parse import urljoin

base_url = 'http://elderscrolls.wikia.com/wiki/Skyrim'
response = requests.get(base_url)
soup = bs4.BeautifulSoup(response.text, 'lxml')
local_a_tags = soup.select('a[href^="/wiki/"]')
links = [a['href'] for a in local_a_tags]
full_links = [urljoin(base_url, link) for link in links]
print(full_links)
Output:
http://elderscrolls.wikia.com/wiki/The_Elder_Scrolls_Wiki
http://elderscrolls.wikia.com/wiki/Portal:Online
http://elderscrolls.wikia.com/wiki/Quests_(Online)
http://elderscrolls.wikia.com/wiki/Main_Quest_(Online)
http://elderscrolls.wikia.com/wiki/Aldmeri_Dominion_Quests
http://elderscrolls.wikia.com/wiki/Daggerfall_Covenant_Quests
http://elderscrolls.wikia.com/wiki/Ebonheart_Pact_Quests
http://elderscrolls.wikia.com/wiki/Category:Online:_Side_Quests
http://elderscrolls.wikia.com/wiki/Factions_(Online)
http://elderscrolls.wikia.com/wiki/Aldmeri_Dominion_(Online)
http://elderscrolls.wikia.com/wiki/Daggerfall_Covenant
http://elderscrolls.wikia.com/wiki/Ebonheart_Pact
http://elderscrolls.wikia.com/wiki/Classes_(Online)
http://elderscrolls.wikia.com/wiki/Dragonknight
http://elderscrolls.wikia.com/wiki/Sorcerer_(Online)
http://elderscrolls.wikia.com/wiki/Nightblade_(Online)
http://elderscrolls.wikia.com/wiki/Templar
http://elderscrolls.wikia.com/wiki/Races_(Online)
http://elderscrolls.wikia.com/wiki/Altmer_(Online)
http://elderscrolls.wikia.com/wiki/Argonian_(Online)
http://elderscrolls.wikia.com/wiki/Bosmer_(Online)
http://elderscrolls.wikia.com/wiki/Breton_(Online)
http://elderscrolls.wikia.com/wiki/Dunmer_(Online)
http://elderscrolls.wikia.com/wiki/Imperial_(Online)
http://elderscrolls.wikia.com/wiki/Khajiit_(Online)
http://elderscrolls.wikia.com/wiki/Nord_(Online)
http://elderscrolls.wikia.com/wiki/Orsimer_(Online)
http://elderscrolls.wikia.com/wiki/Redguard_(Online)
http://elderscrolls.wikia.com/wiki/Locations_(Online)
http://elderscrolls.wikia.com/wiki/Regions_(Online)
http://elderscrolls.wikia.com/wiki/Category:Online:_Realms
http://elderscrolls.wikia.com/wiki/Category:Online:_Cities
http://elderscrolls.wikia.com/wiki/Category:Online:_Dungeons
http://elderscrolls.wikia.com/wiki/Category:Online:_Dark_Anchors
http://elderscrolls.wikia.com/wiki/Wayshrines_(Online)
http://elderscrolls.wikia.com/wiki/Category:Online:_Unmarked_Locations
http://elderscrolls.wikia.com/wiki/Combat_(Online)
http://elderscrolls.wikia.com/wiki/Skills_(Online)
http://elderscrolls.wikia.com/wiki/Ultimate_Skills
http://elderscrolls.wikia.com/wiki/Synergy
http://elderscrolls.wikia.com/wiki/Finesse
http://elderscrolls.wikia.com/wiki/Add-ons
First, use requests instead of urllib.
Then, use a BeautifulSoup CSS selector to filter the hrefs based on how they start; you can refer to the documentation to learn more.
Finally, use urljoin to convert the relative URLs to absolute URLs:
>>> from urllib.parse import urljoin
>>> urljoin('http://www.cwi.nl/%7Eguido/Python.html', 'FAQ.html')
'http://www.cwi.nl/%7Eguido/FAQ.html'
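To address the original looping problem (the crawl never terminating), the usual pattern is a queue of URLs to visit plus a set of URLs already visited, checked before a page is fetched again; a sketch with an arbitrary page cap for safety:

from collections import deque
from urllib.parse import urljoin

import bs4
import requests

base_url = 'http://elderscrolls.wikia.com/wiki/Skyrim'
max_pages = 50  # arbitrary safety cap

visited = set()
queue = deque([base_url])

while queue and len(visited) < max_pages:
    url = queue.popleft()
    if url in visited:
        continue  # already crawled this page
    visited.add(url)

    response = requests.get(url)
    soup = bs4.BeautifulSoup(response.text, 'lxml')

    # queue every local /wiki/ link we haven't seen yet
    for a in soup.select('a[href^="/wiki/"]'):
        link = urljoin(url, a['href'])
        if link not in visited:
            queue.append(link)

print(len(visited), 'pages crawled')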