I am using Python, Selenium, and Firefox. I click a link on a homepage and it leads directly to a JPG file that loads. I just want to verify that the image loads. The HTML of the image is this:
<img src="https://www.treasury.gov/about/organizational-structure/ig/Agency%20Documents/Organizational%20Chart.jpg" alt="https://www.treasury.gov/about/organizational-structure/ig/Agency%20Documents/Organizational%20Chart.jpg">
I am trying to use xpath for locating the element:
def wait_for_element_visibility(self, waitTime, locatorMode, Locator):
    element = None
    if locatorMode == LocatorMode.XPATH:
        element = WebDriverWait(self.driver, waitTime).\
            until(EC.visibility_of_element_located((By.XPATH, Locator)))
    else:
        raise Exception("Unsupported locator strategy.")
    return element
Using this dictionary:
OrganizationalChartPageMap = dict(
    OrganizationalChartPictureXpath="//img[contains(@src, 'Chart.jpg')]",
)
This is the code I am running:
def _verify_page(self):
    try:
        self.wait_for_element_visibility(20,
                                         "xpath",
                                         OrganizationalChartPageMap['OrganizationalChartPictureXpath']
                                         )
    except:
        raise IncorrectPageException
I get the IncorrectPageException thrown every time. Am I doing this all wrong? Is there a better way to verify images using Selenium?
Edit: here is the DOM of the elements:
Matching on the alt value should work in XPath; I would suggest changing the dictionary to:
OrganizationalChartPageMap = dict(OrganizationalChartPictureXpath="//img[@alt='https://www.treasury.gov/about/organizational-structure/ig/Agency%20Documents/Organizational%20Chart.jpg' and contains(@src, 'Chart.jpg')]")
OR
alternatively, use the full path to the image in the src:
OrganizationalChartPageMap = dict(OrganizationalChartPictureXpath="//img[@src='https://www.treasury.gov/about/organizational-structure/ig/Agency%20Documents/Organizational%20Chart.jpg']")
Edit:
According to the DOM shared in the image, you can also use the class of the img, which in your project would correspond to:
element = driver.find_element_by_class_name('shrinkToFit')
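If the goal is to confirm that the image actually rendered, and not just that the element exists, checking the img element's naturalWidth through JavaScript is a common approach. This is only a minimal sketch, not the original code: it assumes a plain driver instance and reuses the src-based locator from the question.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def image_has_loaded(driver, xpath="//img[contains(@src, 'Chart.jpg')]", wait_time=20):
    # Wait until the <img> element is visible on the page.
    img = WebDriverWait(driver, wait_time).until(
        EC.visibility_of_element_located((By.XPATH, xpath))
    )
    # naturalWidth stays 0 when the browser could not load or decode the image data.
    return driver.execute_script(
        "return arguments[0].complete && arguments[0].naturalWidth > 0;", img
    )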
In Selenium Python code I can click a WebElement (e.g. post_raw.click()).
Can I identify the WebElement's link (which will be clicked) with the help of Selenium methods?
I know about driver.current_url, but I am looking for the link before the click. I searched the documentation but did not find a solution: https://www.selenium.dev/selenium/docs/api/py/webdriver_remote/selenium.webdriver.remote.webelement.html
My code example:
from selenium import webdriver
# login to facebook code
driver.get("https://touch.facebook.com/dota2")
posts_raw = driver.find_elements_by_xpath("//article")
post_raw = posts_raw[0]
print(type(post_raw)) # <class 'selenium.webdriver.remote.webelement.WebElement'>
post_raw.click() # how can I get post_raw link (which was clicked in this line)
I want a function like this:
def get_url_from_WebElement(web_elem: WebElement) -> str:
You can try getting the href from the XPath.
Example:
I want to get the link from a question, so I add .get_attribute("href") to find_element:
question_link = driver.find_element(By.XPATH, "/html/body/div[3]/div[2]/div/div[1]/div[3]/div[3]/h2[1]/a[1]").get_attribute("href")
If we print it we get:
https://stackoverflow.com/questions/71371250/selenium-get-object-url
Let's say I have a WebElement and I want to use it to get the a tag inside it. Note that this does not return the exact element that was clicked, since the click generally lands in the middle of the web_element.
def get_url_from_WebElement(web_element):
    try:
        elem = web_element.find_element(By.XPATH, ".//a").get_attribute("href")
        return elem
    except:
        return ''
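A minimal usage sketch against the question's Facebook example, assuming the post's link is the first a tag inside each //article element (which may not hold for every post):
from selenium.webdriver.common.by import By

posts_raw = driver.find_elements(By.XPATH, "//article")
post_raw = posts_raw[0]
url = get_url_from_WebElement(post_raw)  # read the href before clicking
print(url)
post_raw.click()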
Do you mean:
def get_url_from_webelement(xpath):
    the_url = driver.find_element(By.XPATH, xpath).get_attribute("href")
    return the_url
There's this element which I click manually so I can fill out the form and generate a file for download.
As I said, manually it's a clickable element that works just by clicking it. Here's the problem:
Clicking the element won't work; Selenium gives me an error saying the element is not iterable. I've also tried changing the 'class' attribute, which, from what I saw with the Chrome dev tools open, is what changes in the HTML when I click it.
This is the HTML without clicking it (second image):
This is the HTML when I click it (third image):
Edit 14/01/2022 09:54:
The code I tried was:
element = ''
while element == '':
    try:
        element = nvg.find_element_by_xpath('/html/body/div[2]/div[1]/div[2]/div[2]/form/div[1]/label[2]')
    except:
        element = ''
# After this loop it is actually able to find and reference the element.

nvg.find_element_by_xpath('/html/body/div[2]/div[1]'
                          '/div[2]/div[2]/form/div[1]/label[1]').__setattr__('class', "fancy_radio inline fancy_unchecked")

print(element.get_attribute('class'))
# It prints the 'fancy_radio inline fancy_checked' string.

element.__setattr__('class', "fancy_radio inline fancy_checked")
# Returns and changes nothing.

element.click()
# Raises the "element is not iterable" error.
I am trying to paste some text from the clipboard into a hidden textarea element on a website with Playwright, but I keep getting issues because the element has the attribute:
style="visibility:hidden, display:none;"
I am trying to resolve that with the page.evaluate command in Playwright, but can't seem to change the element's visibility. Here is the returned error:
page.evaluate("[id=txbLongDescription] => document.querySelector('[id=txbLongDescription]')", style='visibility:visible')
TypeError: Page.evaluate() got an unexpected keyword argument 'style'
Here is my code so far:
def run(playwright: Playwright) -> None:
browser = playwright.chromium.launch(headless=False)
context = browser.new_context(accept_downloads=True)
page = context.new_page()
# Click in description input field
#paste description from clipboard
paste = pc.paste()
page.evaluate("[id=txbLongDescription] => document.querySelector('[id=txbLongDescription]')", style='visibility:visible')
page.fill('textarea[id=\"txbLongDescription\"]', f'{paste}')
#---------------------
context.close()
browser.close()
with sync_playwright() as playwright:
run(playwright)
print('Done')
Here is my solution:
page.eval_on_selector(
    selector="textarea",  # Modify the selector to fit yours.
    expression="(el) => el.style.display = 'inline-block'",
)
Here is the documentation for eval_on_selector().
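Put together for the question's textarea, the flow could look roughly like this; the id txbLongDescription and the clipboard paste come from the question (pc is assumed to be pyperclip), and the rest of the setup is assumed.
import pyperclip as pc
from playwright.sync_api import Playwright, sync_playwright

def run(playwright: Playwright) -> None:
    browser = playwright.chromium.launch(headless=False)
    context = browser.new_context(accept_downloads=True)
    page = context.new_page()
    # page.goto(...)  # the form's URL is not given in the question

    paste = pc.paste()

    # Make the hidden textarea visible before filling it.
    page.eval_on_selector(
        "#txbLongDescription",
        "(el) => { el.style.visibility = 'visible'; el.style.display = 'inline-block'; }",
    )
    page.fill("#txbLongDescription", paste)

    context.close()
    browser.close()

with sync_playwright() as playwright:
    run(playwright)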
There are two pop-ups: one that asks if you live in California, and a second one that looks like this:
Here is my code:
The second pop-up doesn't show up every time, and when it doesn't, the function works. When it does, I get an element not interactable error and I don't know why. Here is the inspector view for the second pop-up's close button.
test_data = ['Los Angeles', 'San Ramon']
base_url = "https://www.hunterdouglas.com/locator"

def pop_up_one(base_url):
    driver.get(base_url)
    try:
        submit_btn = WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[aria-label='No, I am not a California resident']")))
        submit_btn.click()
        time.sleep(5)
        url = driver.current_url
        submit_btn = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.XPATH, "//*[contains(@class,'icon')]")))
        print(submit_btn.text)
        #submit_btn.click()
    except Exception as t:
        url = driver.current_url
        print(t)
        return url
    else:
        url = driver.current_url
        print("second pop_up clicked")
        return url
I have tried selecting by the aria-label, class name, XPath, etc. The way I have it now shows that there is a Selenium web element when I print the element, but it doesn't let me click it for some reason. Any direction appreciated. Thanks!
There are 41 elements on that page matching the //*[contains(@class,'icon')] XPath locator. At least the first matched element is not visible and not clickable, so when you try to click this submit_btn element you get the element not interactable error.
Since this element does not always appear, you should use logic that clicks it only if it appeared.
With a correct, unique locator your code can be something like this:
submit_btn = WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "[aria-label='No, I am not a California resident']")))
submit_btn.click()
time.sleep(5)
url = driver.current_url
submit_btn = driver.find_elements_by_css_selector('button[aria-label="Close"]')
if submit_btn:
    submit_btn[0].click()
Here I'm using find_elements_by_css_selector, which returns a list of web elements. If the element was not found this will be an empty list, which is interpreted as Boolean False in Python.
If the element is found, we click the first element in the returned list, which will obviously be the desired element.
UPD
See the correct locator used here.
It can also be written in XPath syntax as //button[@aria-label="Close"].
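Folded back into the question's pop_up_one function, the pattern would look roughly like this; a sketch only, keeping the question's global driver and the older find_elements_by_* API used elsewhere in this thread.
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def pop_up_one(base_url):
    driver.get(base_url)
    # First pop-up: always present, so wait for it and click it.
    submit_btn = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.CSS_SELECTOR, "[aria-label='No, I am not a California resident']"))
    )
    submit_btn.click()
    time.sleep(5)

    # Second pop-up: optional, so look it up without waiting and click only if found.
    close_btns = driver.find_elements_by_css_selector('button[aria-label="Close"]')
    if close_btns:
        close_btns[0].click()
        print("second pop_up clicked")

    return driver.current_url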
I am trying to create a Twitch follower bot and have written simple code to go to the website and press the follow button, but it is not clicking the follow button.
import webbrowser

url = "https://twitch.tv/owlcrogs"
driver = webbrowser.open(url)
follow_button = driver.find_element_by_xpath(get_page_element("follow_button"))
follow_button.click()
Where is get_page_element("follow_button") defined?
You must be sure it returns a valid XPath.
In Google Chrome you can find the XPath by right-clicking the target element and selecting Inspect; the developer tools then open. On the selected item in the developer tools do [ right click >> Copy >> Copy XPath ].
E.g.
driver.find_element_by_xpath('//*[#id="id_element"]/div[2]/a/span').click()
If it doesn't, something is wrong with get_page_element. What type of error is returned?
I have just checked the web page and maybe you must put 'follow-button' instead of 'follow_button', with a hyphen instead of an underscore. However, I hope get_page_element searches by the data-a-target attribute.
Here is an example of how to do that:
def find_element_by_attribute(wrapper, attribute, selection=None, xpath=None):
    if selection is not None:
        element = list(filter(lambda x: x.get_attribute(attribute) is not None and
                                        x.get_attribute(attribute).find(selection) != -1,
                              wrapper.find_elements_by_xpath('.//*' if xpath is None else xpath)))
    else:
        element = list(filter(lambda x: x.get_attribute(attribute) is not None,
                              wrapper.find_elements_by_xpath('.//*' if xpath is None else xpath)))
    return None if len(element) == 0 else element[0]
wrapper: page element in which to search for the target element (e.g. a div with li tags).
attribute: attribute used to select the target element.
selection: string with text contained in the target attribute.
xpath: can be used to search within a sub-wrapper element.
You could save this code in a module, e.g. auxiliars.py.
So your code after defining this function should be something like this:
from selenium import webdriver
from auxiliars import find_element_by_attribute

url = "https://twitch.tv/owlcrogs"
driver = webdriver.Firefox()  # webbrowser.open() does not return a driver, so a real WebDriver is needed here
driver.get(url)
follow_button = find_element_by_attribute(driver, 'data-a-target', selection='follow-button')
follow_button.click()
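As a side note, Selenium can target the data-a-target attribute directly with a CSS attribute selector, which avoids filtering every element on the page. This is a sketch only, assuming the button really carries data-a-target="follow-button" and that the account is already logged in:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://twitch.tv/owlcrogs")

# Wait for the follow button and click it via its data-a-target attribute.
follow_button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, '[data-a-target="follow-button"]'))
)
follow_button.click()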