Executing an event from Python

As a beginner with Python I am trying to make a simple automated login project. One more thing I have to do is mouse-click on the 4th row of an HTML table to show the proper content. The HTML code of that segment is:
<tr class="tbl_seznam_barva_1" onclick="setTimeout('__doPostBack(\'ctl02$ctl00$BrowseSql1\',\'Select$0\')',470);" onmouseover="radekSeznamuClass=this.className;this.className='RowMouseOver';" onmouseout="this.className=radekSeznamuClass;">
<td>virtuálny terminál</td>
</tr>
How to execute this "onclick" event?
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
#...
browser = webdriver.Firefox()
elem = browser.find_element_by_name('txtUsername')
elem.send_keys('myLogin' + Keys.RETURN)
elem = browser.find_element_by_xpath("//tr[4]")
# some code for event execution goes here...

If you want to click() on the element with the text virtuálny terminál you can achieve it with:
browser.find_element_by_xpath("//*[text()='virtuálny terminál']").click()
If you need to handle more than one element you can use a for-loop over all the matching elements.
elements = browser.find_elements_by_xpath("//tr[4]")
for i in elements:
    print(i.text)
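If the intent is to actually click each matching row rather than just print its text, a minimal sketch could look like the following (assuming the page does not reload between clicks; the class name is taken from the question's HTML):
rows = browser.find_elements_by_xpath("//tr[@class='tbl_seznam_barva_1']")
for row in rows:
    print(row.text)  # e.g. "virtuálny terminál"
    row.click()      # fires the row's onclick handler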
Edit:
You can use ActionChains:
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
browser = webdriver.Firefox()
my_elem = browser.find_element_by_xpath("//tr[4]")
action = ActionChains(browser)
action.move_to_element(my_elem)
# action.move_to_element_with_offset(my_elem, 5, 5)
action.click()
action.perform()
Edit2:
If you can't use chromedriver and nothing else works, you can use execute_script:
element = browser.find_element_by_xpath("//tr[4]")
browser.execute_script("arguments[0].click();", element)

The problem is that one should wait for the webpage to fully load.
After the line elem.send_keys('myLogin' + Keys.RETURN) the webpage needs time to render the content, so a delay should be added:
import time
# ...
elem.send_keys('myLogin' + Keys.RETURN)
time.sleep(1)
elem=browser.find_element_by_xpath("//tr[4]")
elem.click()
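If you prefer not to hard-code a delay, a minimal sketch using an explicit wait instead of time.sleep (assuming the 4th table row becomes clickable once the page has rendered) would be:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

elem.send_keys('myLogin' + Keys.RETURN)
elem = WebDriverWait(browser, 10).until(
    EC.element_to_be_clickable((By.XPATH, "//tr[4]"))
)
elem.click()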

Cannot find the element on the webpage

I cannot find the element. The code was written using Python with Visual Studio Code:
import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
driver = webdriver.Chrome()
paginaHit = 'https://hit.com.do/solicitud-de-verificacion/'
driver.get(paginaHit)
driver.maximize_window()
time.sleep(5)
bl = 'SMLU7318830A'
elementoBL = driver.find_element(By.XPATH, '//*[@id="billoflanding"]').send_keys(bl)
# WebDriverWait(driver,2).until(EC.element_to_be_clickable((By.NAME, "bl"))).Click()
The code runs OK, but it cannot find the element on the webpage.
The portion of the page you are trying to access is inside an EMBED tag. It behaves similarly to an IFRAME, so I would start by switching the context to the EMBED tag and then try searching for the element.
driver = webdriver.Chrome()
paginaHit = 'https://hit.com.do/solicitud-de-verificacion/'
driver.get(paginaHit)
driver.maximize_window()
embed = driver.find_element(By.CSS_SELECTOR, "embed")
driver.switch_to.frame(embed)
bl = 'SMLU7318830A'
wait = WebDriverWait(driver, 20)
wait.until(EC.visibility_of_element_located((By.ID, "billoflanding"))).send_keys(bl)
Couple of additional points:
Don't use sleeps... sleeps are a bad practice. Instead use WebDriverWait when you need to wait for something to happen.
If you are using an ID to find an element, use By.ID and not XPath. ID should be preferred, when available. Next should be a CSS selector and then finally, XPATH only when needed, e.g. to locate elements by contained text or to do complicated DOM traversal.
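For example, the same billoflanding field from the question could be located in any of these ways, listed from most to least preferred (a sketch of the locator strategies only, not tied to any particular page state):
# preferred: locate by ID
driver.find_element(By.ID, "billoflanding")
# next choice: CSS selector
driver.find_element(By.CSS_SELECTOR, "#billoflanding")
# last resort: XPath, e.g. when you need contained text or DOM traversal
driver.find_element(By.XPATH, "//*[@id='billoflanding']")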

Use Python Selenium to extract span text

Hi, I'm new to Selenium and web scraping and I need some help.
I'm trying to scrape one site and I don't know how to get the text of a span element.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
PATCH = "/Users/bobo/Downloads/chromedriver"
driver = webdriver.Chrome(PATCH)
driver.get("https://neonet.pl")
print(driver.title)
search = driver.find_element_by_class_name("inputCss-input__label-263")
search.send_keys(Keys.RETURN)
time.sleep(5)
I'm trying to extract this span:
<span class="inputCss-input__label-263">Szukaj produktu</span>
I can see that you are trying to search for something in the search bar.
First, I recommend using the XPath instead of the class name. Here is a simple technique to get the XPath of any element on a webpage:
right-click / Inspect element / select the element picker in the upper-left corner / click on the element on the webpage / it will show you the corresponding HTML / right-click the selected HTML / Copy / Copy XPath.
Here is a code example that searches for an element on the webpage. I also included WebDriverWait because sometimes the code runs too fast and can't find the next element, so this makes the code wait until the element is visible:
from selenium import webdriver
from selenium.webdriver.common.by import By
from time import sleep
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(executable_path="/Users/bobo/Downloads/chromedriver")
driver.get("https://neonet.pl") #loading page
wait = WebDriverWait(driver, 20) #defining webdriver wait
search_word = 'iphone\n' # \n is going to act as an enter key
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/main/div[1]/div[4]/div/div/div[2]/div/button[1]'))).click() #clicking on cookies popup
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/main/header/div[2]/button'))).click() #clicking on search button
wait.until(EC.visibility_of_element_located((By.XPATH, '//*[@id="root"]/aside[2]/section/form/label/input'))).send_keys(search_word) #searching on input button
print('done!')
sleep(10)
Hope this helped you!
wait = WebDriverWait(driver, 10)
driver.get('https://neonet.pl')
elem = wait.until(EC.visibility_of_element_located((By.XPATH, "//span[contains(@class,'inputCss-input')]"))).text
print(elem)
To output the text of the element you use .text on the Selenium WebElement.
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
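Putting the snippet and the imports together, a minimal runnable sketch (assuming chromedriver is available on your PATH and no cookie banner is blocking the element) might look like:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://neonet.pl')
wait = WebDriverWait(driver, 10)
elem = wait.until(EC.visibility_of_element_located(
    (By.XPATH, "//span[contains(@class,'inputCss-input')]"))).text
print(elem)  # e.g. "Szukaj produktu"
driver.quit()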

ElementNotInteractableException: element not interactable in Selenium

I am trying to get the review of a certain product but it returns an error.
My code:
import selenium
from selenium import webdriver
chrome_path = r"C:\Users\AV\AppData\Local\Programs\Python\Python39\Scripts\chromedriver.exe"
driver = webdriver.Chrome(chrome_path)
driver.get("https://oldnavy.gapcanada.ca/browse/product.do?pid=647076053&cid=1180630&pcid=26190&vid=1&nav=meganav%3AWomen%3ADeals%3ASale&grid=pds_0_1034_1#pdp-page-content")
driver.execute_script("window.scrollTo(0, 1000)")
import time
from time import sleep
sleep(5)
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.promoDrawer__handlebar__icon"))).click()
review = driver.find_elements_by_class_name("pr-rd-description-text")
for post in review:
    print(post.text)
driver.find_element_by_xpath('//*[@id="pr-review-display"]/footer/div/div/a').click()
review2 = driver.find_elements_by_class_name("pr-rd-description-text")
for post in review2:
    print(post.text)
It returns: selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
Can you please tell me what I should do?
The button is really hard to click.
I guess it could be achieved by adding some more waits and ActionChains moves, but I could click it with JavaScript with no problems.
What the code does:
1. Scrolls to the Next button.
2. Clicks it.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Chrome(executable_path='/snap/bin/chromium.chromedriver')
driver.get(
"https://oldnavy.gapcanada.ca/browse/product.do?pid=647076053&cid=1180630&pcid=26190&vid=1&nav=meganav%3AWomen%3ADeals%3ASale&grid=pds_0_1034_1#pdp-page-content")
driver.execute_script("window.scrollTo(0, 1000)")
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.promoDrawer__handlebar__icon"))).click()
wait.until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, ".pr-rd-description-text")))
review = driver.find_elements_by_css_selector(".pr-rd-description-text")
for post in review:
    print(post.text)
# wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".pr-rd-pagination-btn"))).click()
element = driver.find_element_by_css_selector(".pr-rd-pagination-btn")
# actions = ActionChains(driver)
# actions.move_to_element(element).click().perform()
driver.execute_script("arguments[0].scrollIntoView();", element) # Scrolls to the button
driver.execute_script("arguments[0].click();", element) # Clicks it
print("clicked next")
I also rearranged your code, moved imports to the beginning of the file, got rid of unpredictable time.sleep() and used more reliable css locators. However, your locator should also work.
I left the options I tried commented out.
That element is weird. Even when I scroll it into view, use actions to click it, or execute JavaScript to click it, it doesn't work. What I would suggest is just grabbing the href attribute from the element and going to that URL, using something like this:
driver.get(driver.find_element_by_xpath('//*[@id="pr-review-display"]/footer/div/div/a').get_attribute('href'))

Selenium is returning empty text for elements that definitely have text

I'm practicing trying to scrape my university's course catalog. I have a few lines in Python that open the url in Chrome and click the search button to bring up the course catalog. When I go to extract the text using find_elements_by_xpath(), it returns blank. When I use the dev tools on Chrome, there definitely is text there.
from selenium import webdriver
import time
driver = webdriver.Chrome()
url = 'https://courses.osu.edu/psp/csosuct/EMPLOYEE/PUB/c/COMMUNITY_ACCESS.OSR_CAT_SRCH.GBL?'
driver.get(url)
time.sleep(3)
iframe = driver.find_element_by_id('ptifrmtgtframe')
driver.switch_to.frame(iframe)
element = driver.find_element_by_xpath('//*[@id="OSR_CAT_SRCH_WK_BUTTON1"]')
element.click()
course = driver.find_elements_by_xpath('//*[@id="OSR_CAT_SRCH_OSR_CRSE_HEADER$0"]')
print(course)
I'm trying to extract the text from the element 'OSR_CAT_SRCH_OSR_CRSE_HEADER'. I don't understand why it's not returning the text values, especially when I can see that it contains text with the dev tools.
You are not using .text; that is the reason you are not getting the text.
course = driver.find_element_by_xpath('//*[@id="OSR_CAT_SRCH_OSR_CRSE_HEADER$0"]').text
Try the above change on the second-to-last line.
Below is the full code after the changes
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
driver = webdriver.Chrome()
url = 'https://courses.osu.edu/psp/csosuct/EMPLOYEE/PUB/c/COMMUNITY_ACCESS.OSR_CAT_SRCH.GBL?'
driver.get(url)
time.sleep(3)
iframe = driver.find_element_by_id('ptifrmtgtframe')
driver.switch_to.frame(iframe)
element = driver.find_element_by_xpath('//*[@id="OSR_CAT_SRCH_WK_BUTTON1"]')
element.click()
# wait 10 seconds
course = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.XPATH, '//*[@id="OSR_CAT_SRCH_OSR_CRSE_HEADER$0"]'))
).text
print(course)

Selenium Python: How to wait for a page to load after a click?

I want to grab the page source of the page after I make a click, and then go back using the browser.back() function. But Selenium doesn't let the page fully load after the click, and the content generated by JavaScript isn't included in the page source of that page.
element[i].click()
#Need to wait here until the content is fully generated by JS.
#And then grab the page source.
scoreCardHTML = browser.page_source
browser.back()
As Alan mentioned, you can wait for some element to be loaded. Below is example code:
from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
browser = webdriver.Firefox()
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "element_id")))
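Applied to the flow in the question, that could look like the sketch below, where scorecard_table is a hypothetical id that only exists on the page loaded by the click:
element[i].click()
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "scorecard_table"))  # hypothetical id on the target page
)
scoreCardHTML = browser.page_source
browser.back()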
You can also use Selenium's staleness_of inside a context manager:
from contextlib import contextmanager
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support.expected_conditions import staleness_of

@contextmanager
def wait_for_page_load(browser, timeout=30):
    old_page = browser.find_element_by_tag_name('html')
    yield
    WebDriverWait(browser, timeout).until(
        staleness_of(old_page)
    )
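Usage would then look something like this, with element[i] taken from the question:
with wait_for_page_load(browser):
    element[i].click()
scoreCardHTML = browser.page_source
browser.back()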
You can also do it using a loop of try and wait, an easy method to implement:
from selenium import webdriver
browser = webdriver.Firefox()
browser.get("url")
Button = ''
while not Button:
    try:
        Button = browser.find_element_by_name('NAME OF ELEMENT')
        Button.click()
    except:
        continue
This assumes "pass" is an element on the current page that won't be present on the target page.
I mostly use the id of the link I am going to click on, because it is rarely present on the target page.
while True:
    try:
        browser.find_element_by_id("pass")
    except:
        break
