Selenium can't locate xpath of facebook pages - python

I am trying to download public images from some Facebook pages using XPath. I got the XPath from Google Chrome dev mode (right click and copy XPath).
The XPath I got is: /html/body/div[1]/div/div[1]/div[1]/div[3]/div/div/div[1]/div[1]/div[4]/div/div/div[3]/div/div/div/div[2]
When I search for it in Google Chrome, it finds the XPath just fine, as shown in the image.
But Selenium throws an exception.
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element
The code snippet I am using is as follows:
driver.get(page)
sleep(10)
allimgdiv = driver.find_element_by_xpath(
    '/html/body/div[1]/div/div[1]/div[1]/div[3]/div/div/div[1]/div[1]/div[4]/div/div/div[3]/div/div/div/div[2]')

This expression returns only the posted images on a Facebook page of the form https://www.facebook.com/username/photos:
//h2[.//a[contains(@href,"/photos")]]//following::div//img
Try
images = driver.find_elements_by_xpath('//h2[.//a[contains(@href,"/photos")]]//following::div//img')
for image in images:
    linkimage = image.get_attribute('src')
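A side note on why reading the link needs get_attribute('src') rather than .text: src is an HTML attribute, not the element's text content, so .text on an <img> returns an empty string. A minimal sketch with the stdlib HTML parser (the markup is made up, standing in for the photos page) of what per-image attribute extraction collects:

```python
from html.parser import HTMLParser

class SrcCollector(HTMLParser):
    """Collects the src attribute of every <img>, mirroring what
    get_attribute('src') reads per element in Selenium."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            src = dict(attrs).get('src')
            if src:
                self.srcs.append(src)

collector = SrcCollector()
# Made-up markup standing in for the photos page.
collector.feed('<div><img src="a.jpg"><img src="b.jpg"></div>')
print(collector.srcs)
```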

Related

Can't get the proper XPath to pass to Selenium with Python

I'm stuck on a Selenium scrape using Jupyter. I'm trying to get the "download page data" button in the bottom right corner of this page: https://polkadot.subscan.io/account?role=all&page=1
Also, here's the html code:
Download page data
I've tried copying the xpath and full xpath from the Google Chrome "inspect" tab, but it doesn't work.
Here's the code I used, but feel free to suggest anything else.
#Initiating Webdriver
s=Service('CHROMEDRIVER LOCATION')
op = webdriver.ChromeOptions()
driver = webdriver.Chrome(service=s, options=op)
link = "https://polkadot.subscan.io/account?role=all&page=1"
driver.get(link)
Ingresar = driver.find_element(By.XPATH,"//*[@id='app']/main/div/div/div[5]/div/div[3]/div[1]/div/div")
Here's the error I get:
ElementClickInterceptedException: Message: element click intercepted: Element <div data-v-24af4670="" class="label align-items-center">...</div> is not clickable at point (125, 721). Other element would receive the click: <div data-v-c018c6b4="" class="banner">...</div>
I'd appreciate either a fix for my code, or a new one that works with Jupyter and Selenium.
Try this code:
import time

url = "https://polkadot.subscan.io/account?role=all&page=1"
driver.get(url)
driver.find_element(By.XPATH, ".//*[text()='I accept']").click()
time.sleep(5)
download_btn = driver.find_element(By.XPATH, ".//*[text()='Download page data']")
driver.execute_script("arguments[0].scrollIntoView(true);", download_btn)
download_btn.click()
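The text()= locators above are handy but brittle when the visible label contains an apostrophe, which breaks the single-quoted XPath literal. A small hypothetical helper (the name and behavior are my own, not from the thread) that builds such a locator safely:

```python
def text_xpath(text):
    """Build an XPath that matches any element by its exact text.

    Hypothetical helper: falls back to XPath's concat() when the
    label contains a single quote, which would otherwise break
    the quoted literal.
    """
    if "'" not in text:
        return f".//*[text()='{text}']"
    pieces = ", \"'\", ".join(f"'{p}'" for p in text.split("'"))
    return f".//*[text()=concat({pieces})]"

print(text_xpath("I accept"))
print(text_xpath("Don't"))
```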

How to click on <a> tag using selenium in python?

I am new to web scraping and I am trying to scrape reviews off amazon.
After going to a particular product's page on Amazon, I want to click on the 'see all reviews' button. Inspecting the page, I found the button's structure, shown in the image.
So I tried to find this element using the class name a-link-emphasis a-text-bold.
This is the code I wrote
service = webdriver.chrome.service.Service('C:\\coding\\chromedriver.exe')
service.start()
options = webdriver.ChromeOptions()
#options.add_argument('--headless')
options = options.to_capabilities()
driver = webdriver.Remote(service.service_url, options)
driver.get(url)
sleep(5)
driver.find_element_by_class_name('a-link-emphasis a-text-bold').click()
sleep(5)
driver.implicitly_wait(10)
But this returns me the following error
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".a-link-emphasis a-text-bold"}
What am I doing wrong here?
driver.find_element_by_class_name('a-link-emphasis.a-text-bold').click()
find_element_by_class_name expects a single class, not multiple. You can use the syntax above (replace the space with a dot, since it uses a CSS selector under the hood), or use:
driver.find_element_by_css_selector('.a-link-emphasis.a-text-bold').click()
driver.find_element_by_css_selector('[class="a-link-emphasis a-text-bold"]').click()
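To see why the dot-joined selector works: the class attribute is a whitespace-separated list of tokens, and .a-link-emphasis.a-text-bold matches any element whose list contains both tokens, in any order. A minimal sketch with the stdlib HTML parser (the sample markup is made up, mimicking the button):

```python
from html.parser import HTMLParser

class ClassFinder(HTMLParser):
    """Collects tags whose class attribute contains all wanted classes."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = set(wanted)
        self.hits = []

    def handle_starttag(self, tag, attrs):
        classes = set(dict(attrs).get('class', '').split())
        if self.wanted <= classes:
            self.hits.append(tag)

finder = ClassFinder(['a-link-emphasis', 'a-text-bold'])
# Made-up markup mimicking the button; token order does not matter.
finder.feed('<span><a class="a-text-bold a-link-emphasis" href="#">'
            'See all reviews</a></span>')
print(finder.hits)
```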

Importing image to google forms via selenium driver in python

I am trying to upload an image to a Google Form. I am failing to send keys to the element via XPath. It seems it is a hidden element.
I have tried to execute scripts to unhide it, but with no success.
I also tried these solutions:
How to access hidden file upload field with Selenium WebDriver python
Python-Selenium "input type file" upload
Does anybody have a method to upload to Google Forms via the drag-and-drop file dialog box?
I usually get an error:
NoSuchElementException: no such element: Unable to locate element:
I am sending some pseudocode:
from selenium import webdriver
browser = webdriver.Chrome()
browser.get('ANY GOOGLE FORM URL')
browser.find_element_by_xpath('//*[@id="mG61Hd"]/div/div[2]/div[2]/div[5]/div/div[3]/span/span').click()
fileinput = browser.find_element_by_xpath("//div[contains(@class,'Nf-Er-Xr')]")
fileinput.send_keys('some image path')
On clicking the ADD FILE button in Google Forms, a pop-up opens, which is an iframe. An iframe is basically an HTML page embedded into another one. In order to do anything with it, you have to tell Selenium to switch the context to that iframe.
Check this SO question, which explains how you'd go about switching context to an iframe. For your specific case, i.e. Google Forms, the code would be:
iframe = driver.find_element_by_class_name('picker-frame')
driver.switch_to.frame(iframe)
input_field = driver.find_element_by_xpath('//input[@type="file"]')
input_field.send_keys('some image path')
driver.switch_to.default_content()
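When a locator keeps failing, a quick way to confirm the element lives inside an iframe is to scan the page source for iframe tags and note their class or id. A stdlib sketch you could feed driver.page_source to (the sample markup here is made up):

```python
from html.parser import HTMLParser

class FrameLister(HTMLParser):
    """Collects the class attribute of every <iframe> in a page."""
    def __init__(self):
        super().__init__()
        self.frames = []

    def handle_starttag(self, tag, attrs):
        if tag == 'iframe':
            self.frames.append(dict(attrs).get('class'))

lister = FrameLister()
# In practice you would feed driver.page_source; this markup is made up.
lister.feed('<body><div id="form"></div>'
            '<iframe class="picker-frame"></iframe></body>')
print(lister.frames)
```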

NoSuchElementException on all elements on page with Python Selenium

Set-up
I'm trying to log in to a website using Python + Selenium.
My code to load the website is,
browser = webdriver.Firefox(
    executable_path='/mypath/to/geckodriver')
url = 'https://secure6.e-boekhouden.nl/bh/'
browser.get(url)
Problem
Selenium cannot locate the element containing the account and password fields.
For example, for the field 'Gebruikersnaam',
browser.find_element_by_id('txtEmail')
browser.find_element_by_xpath('//*[@name="txtEmail"]')
browser.find_element_by_class_name('INPUTBOX')
all give NoSuchElementException: Unable to locate element.
Even worse, Selenium cannot find the body element on the page,
browser.find_element_by_xpath('/html/body')
gives NoSuchElementException: Unable to locate element: /html/body.
I'm guessing something on the page is either blocking Selenium (maybe the 'secure6' in the url) or is written in a language/form Selenium cannot handle.
Any suggestions?
All elements are inside a frame, which is why it throws the NoSuchElementException. Try switching to the frame before any actions, as shown below.
browser = webdriver.Firefox(
    executable_path='/mypath/to/geckodriver')
url = 'https://secure6.e-boekhouden.nl/bh/'
browser.get(url)
browser.switch_to.frame(browser.find_element_by_id("mainframe"))
browser.find_element_by_id('txtEmail')
browser.find_element_by_xpath('//*[@name="txtEmail"]')
browser.find_element_by_class_name('INPUTBOX')

Scraping dynamic data with Selenium & Python 'Unable to locate element'

I'm trying to use Selenium & Python to scrape a website (http://epl.squawka.com/english-premier-league/06-03-2017/west-ham-vs-chelsea/matches). I am using the webdriver to click a heading, wait for the new information to load, then click on an object before scraping the resulting data (which loads after the click). My problem is that I keep on getting an 'Unable to locate element' error.
I've taken a screenshot at this point and can physically see the element and I've also printed the entire source code and can see that the element is there.
driver.find_element_by_id("mc-stat-shot").click()
time.sleep(3)
driver.save_screenshot('test.png')
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "svg")))
finally:
    driver.find_element_by_xpath("//g[3]/circle").click()
time.sleep(1)
goalSource = driver.page_source
goalBsObj = BeautifulSoup(goalSource, "html.parser")
#print(goalBsObj)
print(goalBsObj.find(id="tt-mins").get_text())
print(goalBsObj.find(id="tt-event").get_text())
print(goalBsObj.find(id="tt-playerA").get_text())
and the result is an error:
"selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //g[3]/circle"
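A likely cause, not stated in the thread: the circle is part of an inline SVG, and SVG elements live in their own XML namespace, so unqualified tag names like g and circle in an XPath simply don't match them even though they are visible on screen. A minimal stdlib sketch of the effect (the SVG fragment is made up):

```python
import xml.etree.ElementTree as ET

# Made-up inline-SVG fragment like the one on the match page.
svg = ('<svg xmlns="http://www.w3.org/2000/svg">'
       '<g><circle r="4"/></g></svg>')
root = ET.fromstring(svg)

# An unqualified tag name matches nothing, because every SVG
# element carries the SVG namespace:
print(root.findall('.//g'))

# Qualifying the name with the namespace finds the element:
ns = {'s': 'http://www.w3.org/2000/svg'}
print(root.findall('.//s:g', ns))
```

With Selenium, an XPath that compares on name() regardless of namespace, such as "//*[name()='g'][3]/*[name()='circle']", is a common workaround; the index is taken from the original locator and may need adjusting.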
