Given this code:
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("headless")
driver = webdriver.Chrome(options=options)
driver.get('https://covid19.apple.com/mobility')
elements = driver.find_elements_by_css_selector("div.download-button-container a")
csvLink = [el.get_attribute("href") for el in elements]
driver.quit()
At the end, csvLink sometimes has the link but usually does not. If I stop at the last line in the debugger, csvLink often has nothing in it, but if I manually execute elements[0].get_attribute('href') in the debugger, the correct link is returned. Every time.
If I replace
csvLink = [el.get_attribute("href") for el in elements]
with a direct call -
csvLink = elements[0].get_attribute("href")
it also fails. But, again, if I'm stopped at the driver.quit() line and manually execute it, the correct link is returned.
Is there a timing or path dependency I'm unaware of when using Selenium?
I'm guessing it has to do with how and when the JavaScript loads the link: Selenium grabs the element without waiting, before the JavaScript has had a chance to populate the element's href attribute value. Try explicitly waiting for the selector, something like:
(
    WebDriverWait(driver, 20)
    .until(EC.presence_of_element_located(
        (By.CSS_SELECTOR, "div.download-button-container a[href]")))
    .click()
)
Reference:
Selenium - wait until element is present, visible and interactable
How do I target elements with an attribute that has any value in CSS?
Also, if you curl https://covid19.apple.com/mobility, my suspicion would be that the element exists (maybe), but the href is blank.
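If what you actually need is the href value rather than a click, the same wait can hand you the element and you can read the attribute once it is present. A minimal sketch along those lines (untested; the 20-second timeout is arbitrary):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("headless")
driver = webdriver.Chrome(options=options)
driver.get("https://covid19.apple.com/mobility")

# Wait until an <a> inside the download-button container actually carries an href,
# then read it; presence_of_element_located returns the matched element.
link_element = WebDriverWait(driver, 20).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, "div.download-button-container a[href]")
    )
)
csvLink = link_element.get_attribute("href")
driver.quit()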
I want to log into a forum. I already bypassed the cookie message by switching to its iframe, but I can't get access to anything on the page. This is my code:
#set path, open firefox
path = r'C:\Users\Anwender\Downloads\geckodriver-v0.30.0-win64\geckodriver.exe'
driver = webdriver.Firefox(executable_path=path)
#open bym forum
driver.get('https://www.bym.de/forum/')
driver.maximize_window()
#deal with cookie
wait = WebDriverWait(driver,20)
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[title='SP Consent Message']")))
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"button[title='Zustimmen']"))).click()
driver.implicitly_wait(30)
So far, so good. Now a ton of ads pop up. I try to get to the login:
username = driver.find_element_by_id('navbar_username')
username.send_keys("name")
password = driver.find_element_by_id('navbar_password')
password.send_keys("pw")
driver.find_element_by_xpath("/html/body/div[1]/div[2]/main/div[3]/div[1]/ul/li[2]/form/fieldset/div/div/input[4]").click()
I have tried different variants of this using the CSS selector or the XPath. Didn't work. Then I tried waiting 20 seconds until everything had loaded. Didn't work. I tried accessing a different element (a link to a subforum). Didn't work.
I tried:
try:
    wait.until(
        EC.presence_of_element_located((By.ID, "navbar_username"))
    )
finally:
    driver.quit()
Selenium just cannot find this element, and then the browser closes. I have looked for iframes but couldn't find any in the HTML. Is it possible I am not even on the main "window"? "Frame"? "Site"? I don't know what to call it.
Any help would be much appreciated!
You switched to an iframe and closed it, but you never switched back! Selenium is still on the closed, now-absent iframe. Switch back to the main page using:
driver.switch_to.default_content()
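Putting it together, a minimal sketch of the whole sequence (your selectors and IDs, arbitrary 20-second timeout), assuming the login form lives in the top-level document:

wait = WebDriverWait(driver, 20)

# Handle the consent dialog inside its iframe...
wait.until(EC.frame_to_be_available_and_switch_to_it(
    (By.CSS_SELECTOR, "iframe[title='SP Consent Message']")))
wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, "button[title='Zustimmen']"))).click()

# ...then return to the top-level document before touching the login form.
driver.switch_to.default_content()

username = wait.until(EC.presence_of_element_located((By.ID, "navbar_username")))
username.send_keys("name")
password = driver.find_element_by_id("navbar_password")
password.send_keys("pw")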
I am using Python 3.8.5 and Selenium 3.141.0 to automate login processes. I am locating login buttons with CSS selectors, but in my testing the selectors can change (I'm a little fuzzy on this; something dynamic to do with how the page loads?).
My current solution is to iterate over a list of CSS selectors that I have observed (they do repeat):
driver = webdriver.Chrome(options=options)
success = True
errorMSG = ""
for loginClickID in paper.loginClickID:
    wait = WebDriverWait(driver, 12)
    try:
        wait.until(
            EC.presence_of_element_located(By.CSS_SELECTOR, loginClickID)
        )
    except Exception as e:
        log("\nFailed to see click-to-login element: " + loginClickID + "\nHTML output shown below:\n\n" + driver.page_source)
        success = False
        errorMSG = e
    if not success:
        response = driver.page_source
        driver.quit()
        return f"{paper.brand} login_fail\nLoginClickID: {loginClickID}\nNot found in:\n{response}\n{errorMSG}\n"
I am creating a new WebDriverWait() object for each iteration of the loop. However, when I debug the code and step over it manually, the second time I enter the loop the wait.until() method exits immediately without even throwing an exception (which is very strange, right?), and the loop then exits completely (the CSS selector list has 2 elements).
My thought is that somehow the wait.until() timer is not resetting?
I've tried reloading the page using driver.refresh() and sleeping the Python code with time.sleep(1) in the except: section in the hope that this might help reset things; it has not.
I've included all my ChromeDriver options for context:
options = Options()
# options.add_argument("--headless")
options.add_argument("window-size=1400,1500")
options.add_argument("--disable-gpu")
options.add_argument("--no-sandbox")
options.add_argument("enable-automation")
options.add_argument("--disable-infobars")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("start-maximized")
options.add_argument("--disable-browser-side-navigation")
I am using:
Google Chrome 80.0.3987.106
&
ChromeDriver 80.0.3987.106
Any Suggestions?
Why use CSS selectors, which are obviously dynamic? As you say, they change. Why don't you use XPath?
xpath_email = "//input[@type='email']"
xpath_password = "//input[@type='password']"
driver.find_element_by_xpath(xpath_email)
time.sleep(1)
driver.find_element_by_xpath(xpath_password)
These are obviously generic XPaths, but you can find them on whatever login page you are on and adjust them accordingly.
This way, no matter what the test case is, your logic will work.
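If you want to keep the explicit-wait structure from your loop, the same generic XPaths slot straight in. A rough sketch, assuming the driver from your setup above and that the target pages expose standard email/password inputs (the credentials are placeholders):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 12)

# Attribute-based XPaths are less brittle than auto-generated CSS class names.
xpath_email = "//input[@type='email']"
xpath_password = "//input[@type='password']"

email_field = wait.until(EC.presence_of_element_located((By.XPATH, xpath_email)))
password_field = wait.until(EC.presence_of_element_located((By.XPATH, xpath_password)))
email_field.send_keys("user@example.com")
password_field.send_keys("a-placeholder-password")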
I am trying to run a script with Selenium WebDriver in Python, where I am trying to click on a search field, but it always throws the exception "An element could not be located on the page using the given search parameters."
Here is the script:
from selenium import webdriver
from selenium.webdriver.common.by import By

class Exercise:
    def safari(self):
        driver = webdriver.Safari()
        driver.maximize_window()
        url = "https://www.airbnb.com"
        driver.implicitly_wait(15)
        Title = driver.title
        driver.get(url)
        CurrentURL = driver.current_url
        print("Current URL is " + CurrentURL)
        SearchButton = driver.find_element(By.XPATH, "//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2']")
        SearchButton.click()

note = Exercise()
note.safari()
Please tell me where I am wrong.
There appear to be two elements matching that XPath, and the one that matches the search bar is actually the second one. So you'd edit your XPath as follows:
SearchButton = driver.find_element(By.XPATH, "(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
Or simply:
SearchButton = driver.find_element_by_xpath("(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]")
You can check your XPath in Chrome's Inspector by loading the same website in Google Chrome, hitting F12 (or right-clicking anywhere and choosing "Inspect"), and searching for the XPath; this shows you the matching elements. If you scroll to match 2 of 2, it highlights the search bar, so we want the second result. XPath indices start at 1, unlike most languages (which usually start at 0), so to get the second match, wrap the entire original XPath in parentheses and append [2].
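If Safari's implicit wait still races the page load, you can also combine the indexed XPath with an explicit wait. A sketch (the 15-second timeout is arbitrary; By is already imported in your script):

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

search_xpath = "(//*[@id='GeocompleteController-via-SearchBarV2-SearchBarV2'])[2]"
# Wait until the second match is actually clickable before clicking it.
SearchButton = WebDriverWait(driver, 15).until(
    EC.element_to_be_clickable((By.XPATH, search_xpath))
)
SearchButton.click()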
I have code that tells Selenium to wait until an element is clickable, but for some reason Selenium doesn't wait; instead it clicks the element and immediately raises a "not clickable at point (x, y)" error. Any idea how to fix this?
x = '//*[@id="arrow-r"]/i'
driver = webdriver.Chrome(path)
driver.get('https://www.inc.com/inc5000/list/2017')
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, x)))
driver.find_element_by_xpath(x).click()
EC.element_to_be_clickable checks whether the element is visible and enabled. In terms of visibility, it doesn't cover the scenario where the element is behind another one. Maybe your page uses something like a blockUI widget and click() occurs before the cover disappears. You can check whether the element is truly clickable by enriching the EC.element_to_be_clickable((By.XPATH, x)) check with an assertion that ensures the element is not covered by another element. In my projects I use an implementation like the one below:
static bool IsElementClickable(this RemoteWebDriver driver, IWebElement element)
{
    return (bool)driver.ExecuteScript(@"
        (function(element){
            var rec = element.getBoundingClientRect();
            var elementAtPosition = document.elementFromPoint(rec.left+rec.width/2, rec.top+rec.height/2);
            return element == elementAtPosition || element.contains(elementAtPosition);
        })(arguments[0]);
    ", element);
}
This code is in C#, but I'm sure you can easily translate it into your programming language of choice.
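For Python, a rough translation might look like this (untested sketch; it runs the same JavaScript hit-test through execute_script):

def is_element_clickable(driver, element):
    # The element is only truly clickable if the topmost element at its centre
    # point is the element itself or one of its descendants.
    return driver.execute_script("""
        var rec = arguments[0].getBoundingClientRect();
        var elementAtPosition = document.elementFromPoint(
            rec.left + rec.width / 2, rec.top + rec.height / 2);
        return arguments[0] == elementAtPosition
            || arguments[0].contains(elementAtPosition);
    """, element)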
UPDATE:
I wrote a blog post about problems related to clicking with the Selenium framework: https://cezarypiatek.github.io/post/why-click-with-selenium-so-hard/
Here is a link to the 'waiting' section for the Python Selenium docs: Click here
The wait will look like:
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.XPATH, "Your xpath"))
)
element.click()
So I have been using Selenium to open a webpage and wait for a specific element to be loaded. Once that's loaded, I find an element within that first element and get a value from it. But every time I run the code I get a StaleElementReferenceException on the line that says price = float(...). This is weird to me because it didn't crash on the line before, which does the same kind of XPath search. I'm very new to this; any help would be appreciated!
browser.get(url + page)
element = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "searchResultsTable")))
price_element = element.find_element_by_xpath("//div[@class='market_listing_row market_recent_listing_row market_listing_searchresult']")
price = float(price_element.find_element_by_xpath("//span[@style='color:white']").text[:1])