Cannot find the element in the webpage - Python

Cannot find the element. The code was written in Python using Visual Studio Code:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
paginaHit = 'https://hit.com.do/solicitud-de-verificacion/'
driver.get(paginaHit)
driver.maximize_window()
time.sleep(5)

bl = 'SMLU7318830A'
elementoBL = driver.find_element(By.XPATH, '//*[@id="billoflanding"]')
elementoBL.send_keys(bl)
# WebDriverWait(driver, 2).until(EC.element_to_be_clickable((By.NAME, "bl"))).click()
The code runs OK, but it cannot find the element on the webpage.

The portion of the page you are trying to access is inside an EMBED tag. It looks similar to an IFRAME so I would start by switching the context to the EMBED tag and then try searching for the element.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
paginaHit = 'https://hit.com.do/solicitud-de-verificacion/'
driver.get(paginaHit)
driver.maximize_window()

# the form is inside an <embed> element; switch into it like an iframe
embed = driver.find_element(By.CSS_SELECTOR, "embed")
driver.switch_to.frame(embed)

bl = 'SMLU7318830A'
wait = WebDriverWait(driver, 20)
wait.until(EC.visibility_of_element_located((By.ID, "billoflanding"))).send_keys(bl)
A couple of additional points:
Don't use sleeps... sleeps are a bad practice. Instead, use WebDriverWait when you need to wait for something to happen.
If you are using an ID to find an element, use By.ID and not XPath. ID should be preferred when available; next should be a CSS selector, and finally XPath only when needed, e.g. to locate elements by contained text or to do complicated DOM traversal (see the sketch below).
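As an illustration of both points, a minimal sketch using the id from the question (the CSS and XPath lines are equivalent alternatives shown only for comparison):
wait = WebDriverWait(driver, 20)
# preferred: locate by id and wait for it instead of sleeping
wait.until(EC.visibility_of_element_located((By.ID, "billoflanding"))).send_keys(bl)
# equivalent CSS selector, if no id were available:
# driver.find_element(By.CSS_SELECTOR, "#billoflanding")
# XPath only when the simpler strategies are not enough:
# driver.find_element(By.XPATH, "//*[@id='billoflanding']")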

Use Python Selenium to extract span text

Hi, I'm new to Selenium and web scraping and I need some help.
I'm trying to scrape one site and I don't know how to get the span class.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
PATCH = "/Users/bobo/Downloads/chromedriver"
driver = webdriver.Chrome(PATCH)
driver.get("https://neonet.pl")
print(driver.title)
search = driver.find_element_by_class_name("inputCss-input__label-263")
search.send_keys(Keys.RETURN)
time.sleep(5)
I'm trying to extract this span:
<span class="inputCss-input__label-263">Szukaj produktu</span>
I can see that you are trying to search something in the search bar.
First, I recommend using the XPath instead of the class name. Here is a simple technique to get the XPath of any element on a webpage:
right-click / Inspect Element / select the element-picker tool in the upper left / click on the element on the webpage / it will show you the corresponding HTML directly / then right-click on the selected HTML / Copy, and then XPath.
Here is a code example that searches for an element on the webpage. I also included WebDriverWait because sometimes the code runs too fast and can't find the next element, so this makes the code wait until the element is visible:
from selenium import webdriver
from selenium.webdriver.common.by import By
from time import sleep
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(executable_path="/Users/bobo/Downloads/chromedriver")
driver.get("https://neonet.pl")  # loading page
wait = WebDriverWait(driver, 20)  # defining webdriver wait
search_word = 'iphone\n'  # \n is going to act as an enter key

wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/main/div[1]/div[4]/div/div/div[2]/div/button[1]'))).click()  # clicking on cookies popup
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/main/header/div[2]/button'))).click()  # clicking on search button
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/aside[2]/section/form/label/input'))).send_keys(search_word)  # typing into the search input

print('done!')
sleep(10)
Hope this helped you!
wait = WebDriverWait(driver, 10)
driver.get('https://neonet.pl')
elem = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[contains(@class,'inputCss-input')]"))).text
print(elem)
To output the value of the search bar, you use .text on the Selenium WebElement.
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

Python | Selenium Issue with scrolling down and find by class name

For a research study I would like to scrape some links from webpages that are located outside the viewport (to see these links you need to scroll down the page).
Page example (https://www.twitch.tv/lirik)
Link example: https://www.amazon.com/dp/B09FVR22R2
The links are located in div class='Layout-sc-nxg1ff-0 itdjvg default-panel' (16 links on the page in total).
I have written the script, but I get an empty list:
from selenium import webdriver
import time
browser = webdriver.Firefox()
browser.get('https://www.twitch.tv/lirik')
time.sleep(3)
browser.execute_script("window.scrollBy(0,document.body.scrollHeight)")
time.sleep(3)
panel_blocks = browser.find_elements(by='class name', value='Layout-sc-nxg1ff-0 itdjvg default-panel')
browser.close()
print(panel_blocks)
print(type(panel_blocks))
I just get an empty list after the page has loaded. Here is the output from the script above:
/usr/local/bin/python /Users/greg.fetisov/PycharmProjects/baltazar_platform/Twitch_parser.py
[]
<class 'list'>
Process finished with exit code 0
P.S. When the webdriver opens the page, I see there is no scroll-down action. It just opens the page and then closes it after the time.sleep cooldown.
How can I change the script to get the links properly?
Any help or advice would be appreciated!
You are using a wrong locator.
You should use expected-conditions explicit waits instead of hardcoded pauses.
The find_elements method returns a list of web elements, while you want the link inside each element.
This should work better:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time

browser = webdriver.Firefox()
browser.get('https://www.twitch.tv/lirik')
wait = WebDriverWait(browser, 20)
wait.until(EC.element_to_be_clickable((By.XPATH, "//div[@class='channel-panels-container']//a")))
time.sleep(0.5)

link_blocks = browser.find_elements_by_xpath("//div[@class='channel-panels-container']//a")
for link_block in link_blocks:
    link = link_block.get_attribute("href")
    print(link)
browser.close()
To print the values of the href attribute you have to induce WebDriverWait for visibility_of_all_elements_located() and you can use the following Locator Strategy:
Using CSS_SELECTOR:
driver.get("https://www.twitch.tv/lirik")
print([my_elem.get_attribute("href") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "div.Layout-sc-nxg1ff-0.itdjvg.default-panel > a")))])
Console Output:
['https://www.amazon.com/dp/B09FVR22R2', 'http://bs.serving-sys.com/Serving/adServer.bs?cn=trd&pli=1077437714&gdpr=$%7BGDPR%7D&gdpr_consent=$%7BGDPR_CONSENT_68%7D&adid=1085757156&ord=[timestamp]', 'https://store.epicgames.com/lirik/rumbleverse', 'https://bitly/3GP0cM0', 'https://lirik.com/', 'https://streamlabs.com/lirik', 'https://twitch.amazon.com/tp', 'https://www.twitch.tv/subs/lirik', 'https://www.youtube.com/lirik?sub_confirmation=1', 'http://www.twitter.com/lirik', 'http://www.instagram.com/lirik', 'http://gfuel.ly/lirik', 'http://www.cyberpowerpc.com/', 'https://www.cyberpowerpc.com/page/Intel/LIRIK/', 'https://discord.gg/lirik', 'http://www.amazon.com/?_encoding=UTF8&camp=1789&creative=390957&linkCode=ur2&tag=l0e6d-20&linkId=YNM2SXSSG3KWGYZ7']
Note: you have to add the following imports:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

how to resolve python selenium no such element error?

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.canva.com/q/pro-signup/")
time.sleep(6)
driver.switch_to.frame(driver.find_element_by_class_name('rbV9vo63iaj7sGd7XwS4h'))
elem = driver.find_element_by_name("//iframe[contains(@name, '_hjRemote')]")
It can't find the element on the last line. I tried contains, starts-with, and indexing, but none worked.
Try using different locators such as the XPath or the id. If that fails, you could select the element by its CSS selector. If that fails, you could always use a library like pyautogui to physically click on the web element.
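For example, a rough sketch of that fallback order (the locator values and coordinates are placeholders, not taken from the Canva page):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.canva.com/q/pro-signup/")
try:
    # try a normal Selenium locator first (XPath / id / CSS selector)
    elem = driver.find_element(By.CSS_SELECTOR, "iframe[name*='_hjRemote']")
except Exception:
    # last resort: physically click by screen coordinates with pyautogui
    import pyautogui
    pyautogui.click(x=400, y=300)  # illustrative coordinates only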
There are a total of 6 iframes. The elements you are looking for are inside this iframe:
iframe[src^='https://www.canva.com/']
So you need to switch to this frame first:
driver.switch_to.frame(driver.find_element_by_css_selector("iframe[src^='https://www.canva.com/']"))
I would use the below code to click on Sign up with email:
driver = webdriver.Chrome()
driver.maximize_window()
driver.implicitly_wait(30)
driver.get("https://www.canva.com/q/pro-signup/")
wait = WebDriverWait(driver, 10)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe[src^='https://www.canva.com/']")))
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Sign up with email']/.."))).click()
Imports :
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
In case you want to have a predefined iframe stored, you could do something like this:
remote_vars_frame = driver.find_element_by_css_selector("iframe[id='_hjRemoteVarsFrame']")
driver.switch_to.frame(remote_vars_frame)
You can handle this exception by searching for the iframe directly, without switching to any other iframe first:
elem = driver.find_element_by_xpath("//iframe[contains(@name, '_hjRemote')]")

Problem in finding the element using the xPath using selenium in python

I want to log in to the page and click the button which is located at the top of the website, but when I run the code it says that the element could not be found. Below is my Python code:
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://go.xero.com/Dashboard")

user_name = driver.find_element_by_id("email")
user_name.send_keys("mnb@allied.ae")
password = driver.find_element_by_id("password")
password.send_keys()

Submit = driver.find_element_by_id("submitButton")
Submit.click()

cs = driver.find_element_by_xpath("//span[@class='xrh-appbutton--text']")
cs.click()
driver.quit()
below is the HTML source code:
It seems the page requires some time before the elements can be found. To manage this, you need to implement a wait mechanism in your script, using either implicit or explicit waits.
Implicit Wait
driver.get("https://go.xero.com/Dashboard")
driver.implicitly_wait(15)
Explicit Wait
cs = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//span[#class='xrh-appbutton--text']")))
cs.click()
You need to import the following packages for this:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

How to click a button on a website by finding its id

When I run the code, the website loads up fine but then it won't click on the button; an error appears saying the element is not interactable. What do I need to do to click the button? I am relatively new to this and would be grateful for any help.
I have already tried finding it by id and tag.
page = driver.get("https://kenpreston.co.uk/author/")
element = driver.find_element_by_id('mk-button-31')
element.click()
SOLVED:
I used driver.find_element_by_link_text and this worked fine.
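For reference, a minimal sketch of that approach (the visible link text here is a placeholder, not taken from the actual page):
element = driver.find_element_by_link_text("Find Out More")  # placeholder link text
element.click()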
I have checked the website and noticed that mk-button-31 is the id of a div tag, and inside it there is an a tag. Try getting the URL from the a tag and doing another driver.get instead of clicking on it (see the sketch below).
Also, the whole div tag is not clickable, which is why you are getting this error.
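For example, a minimal sketch of that idea (the selector is derived from the id mentioned in the question):
link = driver.find_element_by_css_selector("#mk-button-31 a")
driver.get(link.get_attribute("href"))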
Use sleep from the time library to be sure the page has fully loaded:
from time import sleep
page = driver.get("https://kenpreston.co.uk/author/")
sleep(2)
element = driver.find_element_by_id('mk-button-31')
element.click()
It looks like your element is not clickable; you need to replace the id selector with a CSS selector and wait for the element before clicking on it.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
page = driver.get("https://kenpreston.co.uk/author/")
element = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "#mk-button-31 span")))
element.click()
Consider adding an Explicit Wait to your script, as it might be the case that the DOM has finished loading but the button you're looking for is not there yet.
The classes you're looking for are:
WebDriverWait
expected_conditions
Suggested code change:
#your other imports here
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
#your other code here
page = driver.get("https://kenpreston.co.uk/author/")
element = WebDriverWait(driver, 10).until(expected_conditions.element_to_be_clickable((By.ID, "mk-button-31")))
element.click()
More information: How to use Selenium to test web applications using AJAX technology
