Selenium find_element_with_link_text not working - python

I have a piece of code as follows
<a class="country" href="/es-hn">
Honduras
</a>
and I'm trying to assign it to a variable with
el = self.driver.find_element_by_link_text('Honduras')
However, whenever I run it, I get the following error:
NoSuchElementException: Message: u"Unable to find element with link text == Honduras"

I've seen link_text fail to match when the link text is broken across lines like this. I think it has something to do with the leading whitespace:
<a class="country" href="/es-hn">
[ ]Honduras
</a>
It only seems to work consistently when the text is inline, like this:
<a class="country" href="/es-hn">Honduras</a>
Try this:
el = self.driver.find_element_by_css_selector("a.country[href$='es-hn']")

I agree with sircapslot, in this case partial link text would also work:
el = self.driver.find_element_by_partial_link_text('Honduras')

This can also happen when Selenium looks for the link before your application has rendered it. In that case, make the driver wait until the link appears:
browser.implicitly_wait(10) # 10 seconds
el = self.driver.find_element_by_link_text('Honduras')
The implicitly_wait call makes the driver poll until the element is present on the page and can be interacted with.
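For what it's worth, the polling an implicit wait gives you behaves roughly like this generic helper (just an illustration; `poll_until` is a made-up name, and Selenium's real polling happens inside the driver):

```python
import time

def poll_until(condition, timeout=10, interval=0.5):
    """Call `condition` repeatedly until it returns a truthy value
    or `timeout` seconds have elapsed."""
    end = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= end:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)
```

With Selenium itself you would normally reach for WebDriverWait instead, which wraps exactly this kind of loop around an expected condition.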

Related

Selenium Python finding elementS

I wrote code that works perfectly on one website but fails on another.
Basically the problem is finding elements.
I also tried to find it by relative XPath and absolute XPath, and it still won't find it.
(On the problematic website)
I would like to minimize what I share; if anything else would help, tell me :)
working element:
<div id="areaAvail_0" class="areaAvail red button" style="top: 141.65px; left: 212.221px;" data-areaindex="0" data-areatype="ReservedSeating" data-areaid="f8c68849-c882-e911-80dd-984be16723b6" data-hasqtip="94">
</div>
problematic element:
<div id="areaAvail_0" class="areaAvail red button" style="top: 213.238px; left: 91.901px;" data-areaindex="0" data-areatype="GeneralAdmission" data-areaid="20d4c178-7539-ed11-83d1-e7ab999ef3a1" data-hasqtip="1" aria-describedby="qtip-1">
</div>
The code:
driver = webdriver.Chrome(service=serv_obj)
wait = WebDriverWait(driver, 300)
driver.get(url)
driver.maximize_window()
list1 = driver.find_elements(By.CSS_SELECTOR, "div.areaAvail[data-areaindex]")
wait.until((EC.element_to_be_clickable((By.CSS_SELECTOR, "div.areaAvail[data-areaindex]"))))
print(len(list1))
When I use element_to_be_clickable, it just gets "stuck" on that line.
If I use time.sleep(), after the time passes it prints 0 (on the problematic website).
Any ideas or suggestions?
Thanks in advance!
So the problem is solved.
It was indeed inside an iframe.
In order to click the element inside the iframe, I used these lines:
wait.until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR, "iframe#myIframe")))
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.areaAvail[data-areaindex]")))
thanks everyone!

Get text from div using Selenium and Python

Situation
I'm using Selenium and Python to extract info from a page
Here is the div I want to extract from (the HTML is shown below):
I want to extract the "Registre-se" and the "Login" text.
My code
from selenium import webdriver
url = 'https://www.bet365.com/#/AVR/B146/R^1'
driver = webdriver.Chrome()
driver.get(url.format(q=''))
elements = driver.find_elements_by_class_name('hm-MainHeaderRHSLoggedOutNarrow_Join ')
for e in elements:
    print(e.text)
elements = driver.find_elements_by_class_name('hm-MainHeaderRHSLoggedOutNarrow_Login ')
for e in elements:
    print(e.text)
Problem
My code doesn't produce any output.
HTML
<div class="hm-MainHeaderRHSLoggedOutNarrow_Join ">Registre-se</div>
<div class="hm-MainHeaderRHSLoggedOutNarrow_Login " style="">Login</div>
Looking at this HTML
<div class="hm-MainHeaderRHSLoggedOutNarrow_Join ">Registre-se</div>
<div class="hm-MainHeaderRHSLoggedOutNarrow_Login " style="">Login</div>
and your code (which looks okay to me, except that you are using find_elements for a single web element),
and by reading this comment
The class name "hm-MainHeaderRHSLoggedOutMed_Login " only appears in
the inspect view of the website, but not in the page source. What is it
supposed to do now?
it is clear that the element is in either an iframe or a shadow root,
because page_source does not include iframe content.
Please check whether it is in an iframe; if so, you have to switch to that iframe first, and then you can use the code that you have.
Switch to it like this:
driver.switch_to.frame(driver.find_element_by_xpath('xpath here'))
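If you're not sure which frame the element lives in, you can probe the top document and each top-level iframe in turn. This is only a sketch (the helper name `find_in_any_frame` is made up, and it uses the newer `find_element(by, value)` style); the import fallback just keeps the snippet self-contained:

```python
try:
    from selenium.common.exceptions import NoSuchElementException
except ImportError:  # keep the sketch importable without selenium installed
    class NoSuchElementException(Exception):
        pass

def find_in_any_frame(driver, by, selector):
    """Look for the element in the main document first, then inside
    each top-level <iframe>; return the element or None."""
    driver.switch_to.default_content()
    try:
        return driver.find_element(by, selector)
    except NoSuchElementException:
        pass
    for frame in driver.find_elements("tag name", "iframe"):
        driver.switch_to.default_content()
        driver.switch_to.frame(frame)
        try:
            return driver.find_element(by, selector)
        except NoSuchElementException:
            continue
    driver.switch_to.default_content()
    return None
```

Note this only checks one level of nesting; shadow roots would need `element.shadow_root` or a JavaScript lookup instead.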

Webdriver Selenium with python

Hello colleagues, a question: how would I click an href="javascript:void(0)" link? I have been trying to understand a similar question on this same forum, but I can't quite follow it. I await your contributions.
the XPath of the href => //*[@id="course-link-_62332_1"],
the XPath of the h4 => //*[@id="course-link-_62332_1"]/h4
You can do
driver.find_element_by_id('course-link-_62332_1').click()
href="javascript:void(0)" is used to make the browser stay on the same page when the link is clicked. It might be triggering a task/event defined in a JavaScript/jQuery handler.
Coming to your question, you can click the href like this:
element1 = self.driver.find_element_by_xpath('//*[@id="course-link-_62332_1"]')
element2 = self.driver.find_element_by_xpath('//*[@id="course-link-_62332_1"]/h4')
element1.click()
element2.click()
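If the plain .click() ever gets intercepted by an overlay (common on javascript:void(0) links), a JavaScript click is the usual fallback. A sketch, with `js_click` being a made-up helper name:

```python
def js_click(driver, element):
    """Click via JavaScript, bypassing Selenium's visibility checks.
    Useful when an overlay intercepts the normal .click()."""
    driver.execute_script("arguments[0].click();", element)
```

You would call it as `js_click(driver, element1)` after locating the element normally.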

How to find and click an image link using the image's src (Selenium, Python)

I would like to click an image link, and I need to be able to find it by its src; however, it's not working for some reason. Is this even possible? This is what I'm trying:
#Find item
item = WebDriverWait(driver, 100000).until(EC.presence_of_element_located((By.XPATH, "//img[@src=link]")))
#item = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "//img[@alt='Bzs36xl9 xa']")))
item.click()
In the above code, link = //assets.supremenewyork.com/170065/vi/BZS36xl9-xA.jpg, which matches the HTML below.
The second locator (finding the image by its alt) works, but I will only have the image source when the program actually runs.
HTML for the webpage:
<article>
<div class="inner-article">
<a style="height:81px;" href="/shop/accessories/h68lyxo2h/llhxzvydj">
<img width="81" height="81" src="//assets.supremenewyork.com/170065/vi/BZS36xl9-xA.jpg" alt="Bzs36xl9 xa">
</a>
</div>
</article>
I don't see why finding by alt would work and not src; is this even possible? I saw another similar question, which is where I got my approach, but it didn't work for me. Thanks in advance.
EDIT
To find the link I have to parse through a website in JSON format, here's the code:
#Loads the Supreme JSON endpoint into an object
import json
import urllib2  # Python 2; on Python 3 use urllib.request instead

url = urllib2.urlopen('https://www.supremenewyork.com/mobile_stock.json')
obj = json.load(url)
items = obj["products_and_categories"]["Accessories"]
itm_name = "Sock"
for i in items:
    if itm_name in i["name"]:
        found_url = i["image_url"]
        break
str_link = str(found_url)
link = str_link.replace("ca", "vi")
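The same lookup can be sketched against an inline sample of the JSON shape (the keys come from the code above; the item data here is made up), which also avoids a separate index counter:

```python
# Sample mirroring the shape of mobile_stock.json (data made up)
obj = {"products_and_categories": {"Accessories": [
    {"name": "Beanie", "image_url": "//assets.example.com/1/ca/a.jpg"},
    {"name": "Crew Sock", "image_url": "//assets.example.com/2/ca/b.jpg"},
]}}
itm_name = "Sock"
found_url = next((item["image_url"]
                  for item in obj["products_and_categories"]["Accessories"]
                  if itm_name in item["name"]), None)
# Note: str.replace swaps every occurrence of "ca", not just the path segment
link = str(found_url).replace("ca", "vi")
```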
Use WebDriverWait and element_to_be_clickable. Try the following XPath; hope this will work.
link = '//assets.supremenewyork.com/170065/vi/BZS36xl9-xA.jpg'
item = WebDriverWait(driver, 30).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='inner-article']/a/img[@src='{}']".format(link))))
print(item.get_attribute('src'))
item.click()
item = WebDriverWait(driver, 100000).until(EC.presence_of_element_located((By.XPATH, "//img[@src=link]")))
Here's your problem; I can't believe it didn't jump out at me. You're asking the driver to find an element whose src is literally "link", NOT the value of the variable link that you defined earlier. I don't know offhand how to pass variables into XPaths, but I do know you can use string formatting to build the correct XPath string just before using it.
I also don't speak Python, so here's some pseudo Java/C# to help you get the picture:
String xPathString = String.Format("//img[@src='{0}']", link);
item = WebDriverWait(driver, 100000).until(EC.presence_of_element_located((By.XPATH, xPathString)))
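In Python the same fix is just string formatting; with an f-string, assuming `link` already holds the URL:

```python
link = "//assets.supremenewyork.com/170065/vi/BZS36xl9-xA.jpg"
xpath = f"//img[@src='{link}']"
# then, with the usual selenium imports:
# item = WebDriverWait(driver, 30).until(
#     EC.presence_of_element_located((By.XPATH, xpath)))
```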

Python 3.5 + Selenium Scrape. Is there any way to select <a></a> tags?

So I'm very new to Python and Selenium. I'm writing a scraper to take some balances and download a txt file. So far I've managed to grab the account balances, but downloading the txt files has proven to be a difficult task.
This is a sample of the html
<td>
<div id="expoDato_msdd" class="dd noImprimible" style="width: 135px">
<div id="expoDato_title123" class="ddTitle">
<span id="expoDato_arrow" class="arrow" style="background-position: 0pt 0pt"></span>
<span id="expoDato_titletext" class="textTitle">Exportar Datos</span>
</div>
<div id="expoDato_child" class="ddChild" style="width: 133px; z-index: 50">
<a class="enabled" href="/CCOLEmpresasCartolaHistoricaWEB/exportarDatos.do;jsessionid=9817239879882871987129837882222R?tipoExportacion=txt">txt</a>
<a class="enabled" href="/CCOLEmpresasCartolaHistoricaWEB/exportarDatos.do;jsessionid=9817239879882871987129837882222R?tipoExportacion=pdf">PDF</a>
<a class="enabled" href="/CCOLEmpresasCartolaHistoricaWEB/exportarDatos.do;jsessionid=9817239879882871987129837882222R?tipoExportacion=excel">Excel</a>
<a class="modal" href="#info_formatos">Información Formatos</a>
</div>
</div>
</td>
I need to click on the first "a" with class="enabled", but I just can't manage to get there by XPath, class, or anything else really. Here is the last thing I tried:
#Descarga de Archivos
ddmenu2 = driver.find_element_by_id("expoDato_child")
ddmenu2.find_element_by_css_selector("txt").click()
This is more of the stuff I've already tried:
#TXT = driver.select
#TXT.send_keys(Keys.RETURN)
#ddmenu2 = driver.find_element_by_xpath("/html/body/div[1]/div[1]/div/div/form/table/tbody/tr[2]/td/div[2]/table/tbody/tr/td[4]/div/div[2]")
#Descarga = ddmenu2.find_element_by_visible_text("txt")
#Descarga.send_keys(Keys.RETURN)
I would appreciate your help.
PS: English is not my native language, so I'm sorry for any confusion.
EDIT:
This was the approach that worked; I'll try your other suggestions to make the code neater. Also, it will only work if the mouse pointer is over the browser window; it doesn't matter where.
ddmenu2a = driver.find_element_by_xpath("/html/body/div[1]/div[1]/div/div/form/table/tbody/tr[2]/td/div[2]/table/tbody/tr/td[4]/div/div[1]").click()
ddmenu2b = driver.find_element_by_xpath("/html/body/div[1]/div[1]/div/div/form/table/tbody/tr[2]/td/div[2]/table/tbody/tr/td[4]/div/div[2]")
ddmenu2c = driver.find_element_by_xpath("/html/body/div[1]/div[1]/div/div/form/table/tbody/tr[2]/td/div[2]/table/tbody/tr/td[4]/div/div[2]/a[1]").click()
Pretty much brute force, but I'm getting to like Python scripting.
Or simply use CSS to match on the href:
driver.find_element_by_css_selector("div#expoDato_child a.enabled[href*='txt']")
You can get all anchor elements like this:
a_list = driver.find_elements_by_tag_name('a')
This will return a list of elements. You can click on each element:
for a in a_list:
    a.click()
    driver.back()
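One caveat with that loop: clicking a link usually navigates away, so every element in a_list goes stale after the first driver.back(). Re-finding the list by index on each pass avoids a StaleElementReferenceException. A sketch (`click_each_anchor` is a made-up name):

```python
def click_each_anchor(driver):
    """Click every anchor on the page, re-finding the list after each
    back-navigation so we never touch a stale element."""
    count = len(driver.find_elements("tag name", "a"))
    for i in range(count):
        anchors = driver.find_elements("tag name", "a")  # fresh lookup
        anchors[i].click()
        driver.back()
```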
or try xpath for each anchor element:
a1 = driver.find_element_by_xpath('//a[@class="enabled"][1]')
a2 = driver.find_element_by_xpath('//a[@class="enabled"][2]')
a3 = driver.find_element_by_xpath('//a[@class="enabled"][3]')
Please let me know if this was helpful
You can reach the elements directly by XPath via their text:
driver.find_element_by_xpath("//*[@id='expoDato_child' and contains(., 'txt')]").click()
driver.find_element_by_xpath("//*[@id='expoDato_child' and contains(., 'PDF')]").click()
...
If there is a public link for the page in question that would be helpful.
However, generally, I can think of two methods for this:
If you can discover the direct link, you can extract the link URL and use Python's urllib to download the file directly,
or
use Selenium's click function and have it click on the link in the page.
A quick search turned up this:
downloading-file-using-selenium
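A sketch of the first approach: read the href off the link, then fetch it directly with the standard library, forwarding the browser's cookies so a session-bound URL (like the jsessionid one above) still works. `cookie_header` and `download_with_session` are made-up names:

```python
import urllib.request

def cookie_header(cookies):
    """Turn Selenium's get_cookies() output into a Cookie header value."""
    return "; ".join(f"{c['name']}={c['value']}" for c in cookies)

def download_with_session(driver, href, dest):
    """Fetch `href` with the browser's cookies and write it to `dest`."""
    req = urllib.request.Request(
        href, headers={"Cookie": cookie_header(driver.get_cookies())})
    with urllib.request.urlopen(req) as resp, open(dest, "wb") as out:
        out.write(resp.read())
```

You would pass the href taken from the anchor, e.g. `element.get_attribute("href")`.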
