Selenium/Python - Finding Dynamically Created Fields

Newbie here... 2 days into learning this.
In a learning management system, there is an element (a plus-sign icon) that adds a form field each time it is clicked.  The goal is to click the icon, which generates a new field, and then put text into the new field.  This field does NOT exist when the page loads... it's added dynamically when the icon is clicked.
When I try to use "driver.find_element_by_*" (I have tried ID, name, and XPath), I get an error that the element can't be found. I'm assuming that's because it wasn't there when the page loaded. Is there any way to resolve this?
By the way, I've been successful in scripting the login process and navigating through the site to get to this point. So, I have actually learned how to find other elements that are static.
Let me know if I need to provide more info or a better description.
Thanks,
Bill

Apparently I needed to have patience and let something catch up...
I added:
import time
and then:
time.sleep(3)
after the click on the icon to add the field. It's working!

You can use time.sleep(3), but that forces you to wait the entire 3 seconds before using that element. In Selenium we instead use WebDriver waits, which poll the DOM so we can use the element as soon as it becomes usable.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
# Wait until the element is clickable, then click it (fill in your own CSS selector).
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ""))).click()
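For the original question, the same pattern applies end to end: click the icon, wait for the new field to appear, then type into it. A minimal sketch, assuming hypothetical locators for the plus icon and the generated field:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)
# Click the icon that adds the form field (placeholder locator).
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a.add-field-icon"))).click()
# Wait for the dynamically created field to show up, then type into it (placeholder locator).
new_field = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, "input.new-field")))
new_field.send_keys("some text")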

Related

Accepting cookies popups using selenium in python

I'm trying to put together a small scraper for public trademark data. I have a database available that I'm using Selenium and Python to access.
I can do just about anything I need to, but for some reason I can't actually click the "accept cookies" button on the website. The following code highlights the button, but it does not get rid of the popup.
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
driver.get('https://data.inpi.fr/recherche_avancee/marques')
element = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.ID, "tarteaucitronPersonalize2"))
).click()
I have looked up similar threads on this forum and have tried multiple things:
- adding a waiting period, which ended up highlighting the button, so at least I know it does something
- using JavaScript code to do the actual click, which did not work any better
- calling the button via its ID, its XPath, its CSS selector, anything I could find really
I even downloaded Selenium IDE to record my clicks to see exactly how I could replicate it, but it still only recorded a click.
I tried my best; does anyone know where my mistake lies? I am open to using other languages, or another platform.
Well, it looks like I managed to solve it just minutes after posting my question!
For some reason you need to resize the window. I just added the following line of code after opening the URL and it worked the first time.
driver.maximize_window()
I added this answer in case anyone stumbles upon this post and wants to avoid pulling their hair out over this!
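Putting the fix together with the original wait, a minimal sketch of the working sequence (same locator as in the question; DRIVER_PATH is whatever the original script defines, and waiting for clickability rather than mere presence is an extra precaution):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome(executable_path=DRIVER_PATH)  # DRIVER_PATH as defined elsewhere in the original script
driver.get('https://data.inpi.fr/recherche_avancee/marques')
driver.maximize_window()  # the resize that made the click register
# Wait for the cookie button to be clickable, then click it.
WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "tarteaucitronPersonalize2"))
).click()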

WebScraping through multiple sites with Selenium

I'm using Selenium in a project that consists of opening a range of websites that share pretty much the same structure, collecting data on each site, and storing it.
The problem I ran into is that some of the sites I want to access are unavailable, and when the program gets to one of those it just stops.
What I want it to do is skip those and carry on with the next iterations, but so far my attempts have been unsuccessful... In my latest try I used the method is_displayed(), but apparently it only tells me whether an element is visible, not whether it is present.
if driver.find_element_by_xpath('//*[@id="main-2"]/div[2]/div[1]/div[1]/div/div[1]/strong').is_displayed():
The example above doesn't work, because the driver needs to find the element before it can tell me whether it is visible, and the element is simply not there.
Have any of you dealt with something similar?
(Screenshots in the original question show how one of the sites looks normally and how it looks when it is unavailable.)
You can use Selenium's expected conditions to wait for the element's presence.
I'm just giving an example below.
I have set the timeout to 5 seconds here, but you can use any timeout value.
Also, your element locator looks fragile; a long absolute XPath like that breaks easily.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element_xpath_locator = '//*[@id="main-2"]/div[2]/div[1]/div[1]/div/div[1]/strong'
wait = WebDriverWait(browser, 5)
wait.until(EC.presence_of_element_located((By.XPATH, element_xpath_locator)))
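To actually skip unavailable sites instead of stopping, the wait can be wrapped in a try/except so a timeout simply moves on to the next iteration. A minimal sketch, assuming urls is the list of sites being looped over:
from selenium.common.exceptions import TimeoutException
for url in urls:  # urls is assumed to be the list of sites being visited
    browser.get(url)
    try:
        element = wait.until(EC.presence_of_element_located((By.XPATH, element_xpath_locator)))
    except TimeoutException:
        # The element never appeared: treat the site as unavailable and skip it.
        continue
    # ...collect and store the data from `element` here...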

Selenium - Issue Getting/Clicking Button Element

I have been messing around with Python and Selenium and was trying to make a program that could automatically check out on bestbuy.ca. I have gotten to nearly the last stage of the checkout process but am having difficulty locating the elements and clicking the buttons to continue. The only button I have been able to get to work is the add-to-cart button on the product page.
Here is the site as well as the HTML (screenshot linked in the original question).
Some things I have tried with no success include:
driver.find_element_by_class_name().click()
For this, I used the classes on the button and the span. I tried each individual class separately and together with little success. It would usually give me a "no element found" error or just not click anything afterwards.
driver.find_element_by_xpath().click()
For this, I tried using the element locator extension for the XPath as well as "//*[@type='submit']" with no success.
driver.find_element_by_css_selector().click()
For this, I tried the element locator extension again as well as copying the selector from the Firefox inspector, with no success.
Not really sure if there are any other ways to go about doing this but any help would be appreciated.
Did you try copying the entire class attribute from the HTML and using:
wait.until(EC.presence_of_element_located((By.XPATH, "//button[@class='asd asd sad asd']"))).click()
Copy the button's full class value and paste it in place of the placeholder.
Try either of the following to click on the button tag.
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[text()='Continue']/parent::button"))).click()
Or by the data-automation attribute:
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[@data-automation='review-order']"))).click()
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
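Put together, a minimal runnable sketch of the second suggestion; the data-automation value comes from the answer above and is assumed to still match the page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
# ...log in, add the product to the cart, and reach the checkout page first...
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.XPATH, "//button[@data-automation='review-order']"))).click()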

How to handle load more button in selenium python

I am new to Python and I am trying to automate clicking the "load more" button in the comment section of an Instagram post using Selenium. I am able to click it only once: after the first click the button disappears, and even after waiting 10 minutes it did not appear again. Also, the status of the request changes to 302 when the click is performed through automation, whereas it stays 200 when I click manually. Please help me figure out how to keep clicking until all the comments have been loaded. Any help will be appreciated. Here is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import time
options = webdriver.FirefoxOptions()
options.add_argument('start-maximized')
browser = webdriver.Firefox(executable_path='./drivers/geckodriver')
browser.maximize_window()
url = 'https://www.instagram.com/p/CKCVIu2gDgn'
browser.get(url)
browser.implicitly_wait(10) #wait for 10 sec
load_more = browser.find_element(By.XPATH,'/html/body/div[1]/section/main/div/div[1]/article/div[3]/div[1]/ul/li/div/button/span').click()
For something like this, you might want to use Selenium's execute_script feature, which basically lets you execute JavaScript on the webpage.
The following JavaScript code will find the button in the comments and click it.
document.querySelector("button.dCJp8.afkep").click()
For such a feat, you would not want to use Selenium to click it, because Selenium tries to mimic human behaviour as much as possible. In this case, Selenium would try to find the button, and if it is not visible on the page (i.e. you need to scroll to find it), it would raise an exception.
The following is a line of Python code you can use in your program to click the load more button.
driver.execute_script("document.querySelector('button.dCJp8.afkep').click()")
By the way, I am assuming that the circular plus-sign button is the click target.
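To keep loading comments rather than clicking just once, the JavaScript click can be repeated until the button is gone. A minimal sketch, reusing the selector from this answer (Instagram's class names change over time, so it may need updating):
import time
while True:
    # Click the load-more button if it exists; report whether a click happened.
    clicked = browser.execute_script(
        "var b = document.querySelector('button.dCJp8.afkep');"
        "if (b) { b.click(); return true; } return false;"
    )
    if not clicked:
        break  # no load-more button left, all comments are loaded
    time.sleep(2)  # give the next batch of comments time to load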
Use the instaloader library; it handles this sort of thing very easily.
The full documentation is here.

How to periodically re-check a webpage using selenium in python

I am new to Selenium in Python (and to web-facing Python in general) and I have a task to complete for my present internship.
My script successfully navigates to an online database and inputs information from my data tables, but then the webpage in question takes anywhere from 30 seconds to several minutes to compute an output.
How do I go about instructing Python to re-check the page every 30 seconds until the output appears so that I can parse it for the data I need? For instance, which functions might I start with?
This will be part of a loop repeated for over 200 entries, and hundreds more if I am successful, so it is worth my time to automate it.
Thanks
You should use Selenium's waits, as pointed out by G_M and Sam Holloway.
The one I use most is expected_conditions:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("http://somedomain/url_that_delays_loading")
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "myDynamicElement"))
    )
finally:
    driver.quit()
It will wait until there is an element with the id "myDynamicElement" and then carry on with the try block, which should contain the rest of your work.
I prefer to use By.XPATH, but if you use By.XPATH with the method presence_of_element_located, add another pair of parentheses so the locator is passed as the required tuple, as noted in this answer:
from selenium.webdriver.common.by import By
driver.find_element(By.XPATH, '//button[contains(text(),"Some text")]')
driver.find_element(By.XPATH, '//div[#id="id1"]')
driver.find_elements(By.XPATH, '//a')
For me, the easiest way to find the XPath of an element is to open Chrome's developer tools (F12), press Ctrl+F, and use the inspector while composing an XPath that is specific enough to match just the expected element, or as few elements as possible.
All the examples are from (or based on) the great Selenium documentation.
If you just want to space out checks, the time.sleep() function should work.
However, as G_M's comment says, you should look into Selenium waits. Think about this: is there an element on the page that will indicate that the result is loaded? If so, use a Selenium wait on that element to make sure your program is only pausing until the result is loaded and not wasting any time afterwards.
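If the requirement really is "re-check every 30 seconds until the output appears", WebDriverWait can do exactly that: its poll_frequency argument controls how often the condition is re-evaluated. A minimal sketch, assuming a hypothetical id for the result element:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Re-check every 30 seconds, giving up after 10 minutes.
wait = WebDriverWait(driver, 600, poll_frequency=30)
result = wait.until(EC.presence_of_element_located((By.ID, "results-table")))  # placeholder id
# ...parse `result` for the data you need...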
