I'm developing a web scraper to collect the src link from a source tag in an html file and add it to a list.
The site has a video nested under a load of divs, but all of the pages eventually come to:
<video type="video/mp4" poster="someimagelink" preload="metadata" crossorigin="anonymous">
<source type="video/mp4" src="somemp4link">
</video>
My current method is logging into the site, going to the page with the links to the video pages, going to each video page one by one and trying to find the source tag and adding it to the list.
import time
import requests
from bs4 import BeautifulSoup
from selenium import webdriver
browser = webdriver.Firefox()
# A bunch of log in and get list of video page links, which works fine
soup = BeautifulSoup(browser.page_source, 'html.parser')
for i in range(3):
    browser.get(soup('a', {'class': 'subject__item'})[i]['href'])
    vsoup = BeautifulSoup(browser.page_source, 'html.parser')
    print(vsoup('source'))
    browser.get('pageWithVideoPages')
# This doesn't add to a list; it just goes to the video page,
# tries to find the source tag and prints it out.
# Then it goes back to the original page and starts the loop again.
What happens however is I get this:
[<source src="themp4link" type="video/mp4"></source>]
[]
[]
[]
So the first one works fine, then all the rest just return blank lists... as if there was no source tag, but manually checking the inspector reveals that there is a source tag there.
Repeating this, I now get:
[<source src="http://themp4link" type="video/mp4"></source>]
[]
[<source src="http://themp4link" type="video/mp4"></source>]
The site needed JavaScript enabled to load the content (which is why I'm using webdriver to do this)... could it be something to do with that?
Any help is much appreciated!
You probably need to wait for the web element you are looking for. You should explore using WebDriverWait.
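For example, a minimal sketch for the loop above (the 10-second timeout and the 'html.parser' argument are illustrative choices, not from the original code):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for a <source> element to appear in the DOM
# before handing the page source to BeautifulSoup.
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.TAG_NAME, 'source'))
)
vsoup = BeautifulSoup(browser.page_source, 'html.parser')
print(vsoup('source'))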
I am trying to scrape the link of a hi-res image from this link, but the high-res version of the image can only be inspected upon clicking on the mid-sized link on the page, i.e. after clicking "Click here to enlarge the image" (on the page, it's in Turkish).
Then I can inspect it with Chrome's "Developer Tools" and get the xpath/css selector. Everything is fine up to this point.
However, you know that on a JS page you just can't type response.xpath("//blah/blah/@src") and get some data. I installed Splash (with Docker pull) and configured my Scrapy settings.py files etc. to make it work (this YouTube link helped; no need to visit the link unless you want to learn how to do it). ...and it worked on other JS webpages!
Just... I cannot get past this "Click here to enlarge the image!" thing and get the response. It gives me a null response.
This is my code:
import scrapy
#import json
from scrapy_splash import SplashRequest

class TryMe(scrapy.Spider):
    name = 'try_me'
    allowed_domains = ['arabam.com']

    def start_requests(self):
        start_urls = ["https://www.arabam.com/ilan/sahibinden-satilik-hyundai-accent/bayramda-arabasiz-kalmaa/17753653",
                      ]
        for url in start_urls:
            yield scrapy.Request(url=url,
                                 callback=self.parse,
                                 meta={'splash': {'endpoint': 'render.html', 'args': {'wait': 0.5}}})
            # yield SplashRequest(url=url, callback=self.parse)  # this works too

    def parse(self, response):
        ## I can get this one's link successfully since it's not between js codes:
        #IMG_LINKS = response.xpath('//*[@id="js-hook-for-ing-credit"]/div/div/a/img/@src').get()
        ## but this one just doesn't work:
        IMG_LINKS = response.xpath("/html/body/div[7]/div/div[1]/div[1]/div/img/@src").get()
        print(IMG_LINKS)  # prints null :(
        yield {"img_links": IMG_LINKS}  # gives the items: img_links:null
Shell command which I'm using:
scrapy crawl try_me -O random_filename.jl
Xpath of the link I'm trying to scrape:
/html/body/div[7]/div/div[1]/div[1]/div/img
I actually can see the link I want on the Network tab of my Developer Tools window when I click to enlarge it but I don't know how to scrape that link from that tab.
Possible Solution: I will also try to get the whole garbled body of my response, i.e. response.text, and apply a regular expression (e.g. starts with https://... and ends with .jpg) to it. That will definitely be looking for a needle in a haystack, but it sounds quite practical as well.
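Something like this minimal sketch (the pattern is a guess at the URL shape, not taken from the page):

import re
# Hedged sketch: pull every https...jpg URL out of the raw response body.
links = re.findall(r'https://[^\s"\']+\.jpg', response.text)
print(links)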
Thanks!
As far as I understand, you want to find the main image link. I checked out the page; it is inside one of the meta elements:
<meta itemprop="image" content="https://arbstorage.mncdn.com/ilanfotograflari/2021/06/23/17753653/3c57b95d-9e76-42fd-b418-f81d85389529_image_for_silan_17753653_1920x1080.jpg">
Which you can get with
>>> response.css('meta[itemprop=image]::attr(content)').get()
'https://arbstorage.mncdn.com/ilanfotograflari/2021/06/23/17753653/3c57b95d-9e76-42fd-b418-f81d85389529_image_for_silan_17753653_1920x1080.jpg'
You don't need to use Splash for this. If I check the website with Splash, arabam.com gives a permission denied error. I recommend not using Splash for this website.
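In the spider above, that boils down to something like this sketch (reusing the asker's img_links key and dropping the Splash meta):

def parse(self, response):
    # The main image URL sits in a <meta itemprop="image"> tag that is
    # present in the plain HTML response - no JS rendering needed.
    img_link = response.css('meta[itemprop=image]::attr(content)').get()
    yield {"img_links": img_link}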
For a better solution that covers all the images, you can parse the JavaScript. The images array is loaded with JS right there in the page source.
To reach that JavaScript, try:
response.css('script::text').getall()[14]
This will give you the whole JavaScript string containing the images array. You can parse it with a library like js2xml.
Check out how you can use it here: https://github.com/scrapinghub/js2xml. If you still have questions, you can ask. Good luck!
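A minimal sketch of that approach (the script index 14 comes from the snippet above; treating the extracted literals as the images array is an assumption worth verifying):

import js2xml

# Grab the <script> block that defines the images array (index 14 as above).
jscode = response.css('script::text').getall()[14]
parsed = js2xml.parse(jscode)
# js2xml.jsonlike.getall() extracts JSON-like literals (arrays/objects)
# from the parsed JavaScript; the images array should be among them.
for obj in js2xml.jsonlike.getall(parsed):
    print(obj)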
The issue I'm having is that I want to grab the related links from this page: http://support.apple.com/kb/TS1538
If I Inspect Element in Chrome or Safari, I can see the <div id="outer_related_articles"> and all the articles listed. If I attempt to grab it with BeautifulSoup, it grabs the page and everything except the related articles.
Here's what I have so far:
import urllib2
from bs4 import BeautifulSoup
url = "http://support.apple.com/kb/TS1538"
response = urllib2.urlopen(url)
soup = BeautifulSoup(response.read())
print soup
This section is loaded using Javascript. Disable your browser's Javascript to see how BeautifulSoup "sees" the page.
From here you have two options:
Use a headless browser that will execute the Javascript. See this question about it: Headless Browser for Python (Javascript support REQUIRED!)
Try to figure out how the Apple site loads the content and simulate it - it probably makes an AJAX call to some address.
After some digging, it seems it makes a request to this address (http://km.support.apple.com/kb/index?page=kmdata&requestid=2&query=iOS%3A%20Device%20not%20recognized%20in%20iTunes%20for%20Windows&locale=en_US&src=support_site.related_articles.TS1538&excludeids=TS1538&callback=KmLoader.receiveSuccess) and uses JSONP to load the results, with KmLoader.receiveSuccess being the name of the receiving function. Use Firebug or Chrome dev tools to inspect the page in more detail.
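A minimal sketch of simulating that call (the shape of the payload is an assumption; inspect the real response before relying on it):

import json
import re
import requests

# The JSONP endpoint found above; the response arrives wrapped in
# KmLoader.receiveSuccess(...), which must be stripped before json.loads().
url = ("http://km.support.apple.com/kb/index?page=kmdata&requestid=2"
       "&query=iOS%3A%20Device%20not%20recognized%20in%20iTunes%20for%20Windows"
       "&locale=en_US&src=support_site.related_articles.TS1538"
       "&excludeids=TS1538&callback=KmLoader.receiveSuccess")
body = requests.get(url).text
match = re.search(r'KmLoader\.receiveSuccess\((.*)\)\s*;?\s*$', body, re.DOTALL)
if match:
    data = json.loads(match.group(1))
    print(data)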
I ran into a similar problem; HTML content that is created dynamically may not be captured by BeautifulSoup. A very basic solution is to make it wait a few seconds before capturing the content, or to use Selenium instead, which has the functionality to wait for an element and then proceed. For the former, this worked for me:
import time
# .... your initial bs4 code here
time.sleep(5) #5 seconds, it worked with 1 second too
html_source = browser.page_source
# .... do whatever you want to do with bs4
I am trying to get stock options prices from this website based on the series code (for example, FMM1), but the content is dynamically generated after the page loads, and my Python Selenium script does not extract the correct source code and therefore does not find it. When I inspect the element I can find it, but not when I click on "view source code".
This is my code:
# Here, we open the website for options prices in Chrome
driver = webdriver.Chrome()
driver.get("http://www.bmfbovespa.com.br/pt_br/servicos/market-data/consultas/mercado-de-derivativos/precos-referenciais/precos-referenciais-bm-f-premios-de-opcoes/")
# Since the page is populated by JavaScript code *after* loading the page, we
# tell the browser to wait 10 seconds before getting the source html code
time.sleep(10)
html_file = driver.page_source # gets the html source of the page
print(html_file)
I have also tried the following, but it did not work:
WebDriverWait(driver, 60).until(
    EC.visibility_of_element_located((By.ID, "divContainerIframeBmf"))
)
Use this after the page loads:
driver.switch_to.frame(driver.find_element_by_xpath("//iframe"))
and continue performing your operations on the page.
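Putting it together, a minimal sketch (the 60-second timeout mirrors the attempt above; the bare //iframe locator assumes the page has a single relevant iframe):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("http://www.bmfbovespa.com.br/pt_br/servicos/market-data/consultas/mercado-de-derivativos/precos-referenciais/precos-referenciais-bm-f-premios-de-opcoes/")
# Wait until the iframe is available and switch into it; the prices table
# lives inside the frame, so page_source of the top document never shows it.
WebDriverWait(driver, 60).until(
    EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe"))
)
html_file = driver.page_source  # now the frame's document, not the outer page
print(html_file)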
I need to get the video link from a web page. I click on inspect element and go to the Network tab, and I see the link I need to get... But how can I access this link through Python?
this is the situation:
http://i.imgur.com/DS811BW.jpg?1
the link is positioned in the header:
http://i.imgur.com/5C2vKje.jpg
I need only link, I don't need to download the video.
What would be the best path to go? Maybe Selenium?
Selenium will work, yes. What you'll want to do is find the element in the DOM that's pulling it in. Before you go that route, though, you should try to figure out what element you're after manually. You're probably after a video tag and its child source tag.
HTML 5 video tag docs: http://www.w3schools.com/tags/tag_video.asp
Selenium selector docs: https://selenium-python.readthedocs.org/locating-elements.html
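For instance, a minimal sketch along those lines (the URL is a placeholder; the selector assumes the standard video/source markup described above):

from selenium import webdriver

browser = webdriver.Firefox()
browser.get("http://example.com/video-page")  # placeholder URL
# Locate the <source> child of the <video> element and read its src attribute.
source = browser.find_element_by_css_selector("video source")
print(source.get_attribute("src"))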
You just need to make an HTTP request to get the page and then go through the response to get the URL. You need to define the XPath and use lxml to extract the URL. Something like this (it is just an example and probably will not work as-is):
import lxml.html as parser
import requests

# do_request() example
def do_request(url):
    r = requests.get(url)
    return r.text if r.status_code == 200 else None

path = <define the XPath>
url = <your url>
data = do_request(url)
if data:
    doc = parser.fromstring(data)
    url_res = doc.xpath(path)  # the url from the webpage
I am learning to use Python Selenium and BeautifulSoup for web scraping. Currently, I am trying to scrape the hot searches on Google search trends http://www.google.com/trends/hottrends#pn=p5
This is my current code. However, I realized the full HTML is not downloaded, and I only have content from the most recent few dates. What can I do to rectify this problem?
from selenium import webdriver
from bs4 import BeautifulSoup
googleURL = "http://www.google.com/trends/hottrends#pn=p5"
browser = webdriver.Firefox()
browser.get(googleURL)
content = browser.page_source
soup = BeautifulSoup(content)
print soup
Users add more content to the page (from previous dates) by clicking the <div onclick="control.moreData()" id="moreLink">More...</div> element at the bottom of the page.
So to get your desired content, you could use Selenium to click the id="moreLink" element or execute some JavaScript to call control.moreData(); in a loop.
For example, if you want to get all content as far back as Friday, February 15, 2013 (it looks like a string of this format exists for every date, for loaded content) your python might look something like this:
content = browser.page_source
desired_content_is_loaded = False
while not desired_content_is_loaded:
    if "Friday, February 15, 2013" not in content:
        browser.execute_script("control.moreData();")
        content = browser.page_source
    else:
        desired_content_is_loaded = True
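Equivalently, clicking the element itself instead of calling the JS directly (a sketch using the id from the markup above):

browser.find_element_by_id("moreLink").click()
content = browser.page_source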
EDIT:
If you disable JavaScript in your browser and reload the page, you will see that there is no "trends" content at all. That tells me those items are loaded dynamically, meaning they are not part of the HTML document which is downloaded when you open the page. Selenium's .get() waits for the HTML document to load, but not for all JS to complete. There's no telling whether async JS will complete before or after any other event; it completes when it's ready, and that could be different every time. That would explain why you might sometimes get all, some, or none of that content when you call browser.page_source: it depends on how fast the async JS happens to be working at that moment.
So, after opening the page, you might try waiting a few seconds before getting the source - giving the JS which loads the content time to complete.
import time

browser.get(googleURL)
time.sleep(3)
content = browser.page_source