Clicking JavaScript links using Selenium WebDriver in Python

I've been working on automation for a few months now, and for a recent project I came across a JavaScript link whose href has the format "javascript:srcUp", with no "htm" at the end. I've tried multiple ways of identifying this element, including css_selector, xpath, link_text, execute_script, etc., but none of them seem to work. With the link_text method I was able to identify the element, but the click command didn't actually do anything. Has anyone faced this issue? If so, it would be great if you could share a possible solution.
Thanks
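One approach that often works for javascript: hrefs is to fire the click through JavaScript instead of Selenium's native click, or to execute the href's JavaScript directly. A minimal sketch, where the URL and link text are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://example.com")  # placeholder URL

link = driver.find_element(By.LINK_TEXT, "My Link")  # placeholder link text, as in the link_text approach above

# Option 1: trigger the click from JavaScript, which runs the javascript: href
driver.execute_script("arguments[0].click();", link)

# Option 2: run the href's JavaScript directly
js = link.get_attribute("href")
if js and js.startswith("javascript:"):
    driver.execute_script(js[len("javascript:"):])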

Related

Find Element in Selenium (Python) not possible due to multiple html tags

Hi :) This is my first time here and I am new to programming.
I am currently trying to automate some work-steps, using Selenium.
So far there has been no problem, mainly using the find_element(By.ID, '') function and clicking things.
But now I cannot find any element that comes after the second "html" tag on the site (see screenshot).
I tried to google this "multiple html" problem, but all I found was people saying it is not possible to have multiple html tags. I basically don't know anything about HTML, but this site seems to have more than one - there are actually three - and anything after the first one can't be reached with find_element. Please help me clear up this confusion.
These "multiple html" are due to the i frames in the html code. Each iframe has its own html code. If the selector you are using is meant to find something inside one of these iframes you have to "move" your driver inside the iframe. You can find an example in this other question

Get Xpath for Element in Python

I've been researching this for two days now. There seems to be no simple way of doing this. I can find an element on a page by downloading the HTML with Selenium and passing it to BeautifulSoup, then searching by classes and strings. I want to click on this element after finding it, so I want to pass its XPath to Selenium. I have no minimal working example, only pseudo-code for what I'm hoping to do.
Why is there no function/library that lets me search through the HTML of a webpage, find an element, and then request its XPath? I can do this manually by inspecting the webpage and clicking 'copy XPath'. I can't find any solutions to this on Stack Overflow, so please don't tell me I haven't looked hard enough.
Pseudo-code:
# parser is a BeautifulSoup HTML object
for box in parser.find_all('span', class_="icon-type-2"):  # find all elements with a particular icon
    xpath = box.get_xpath()
I'm willing to change my code entirely, as long as I can locate a particular element and extract its XPath, so ideas based on entirely different libraries are welcome too.
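A minimal sketch of one way to do this, assuming lxml is available: parse driver.page_source with lxml instead of BeautifulSoup, ask lxml for each matched element's absolute XPath via getroottree().getpath(), and hand that XPath back to Selenium. The URL is a placeholder, and if clicking changes the page, the remaining XPaths may go stale.

from lxml import html
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("http://example.com")  # placeholder URL

tree = html.fromstring(driver.page_source)
for box in tree.xpath('//span[contains(@class, "icon-type-2")]'):
    xpath = tree.getroottree().getpath(box)  # absolute XPath, e.g. /html/body/div[2]/span[1]
    driver.find_element(By.XPATH, xpath).click()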

Python: Clicking first Google result using Selenium

I've tried almost every method of clicking on the first result of a Google search.
I tried to find the element using almost every hint I've found on the net, but unfortunately none of them succeeded.
Does anyone know how I can click on it, or even just get the href as text so I can navigate there myself?
Thanks a lot!
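A minimal sketch of one approach, assuming the organic result titles are rendered as h3 elements inside a tags under div#search; Google's markup changes frequently and consent pop-ups can get in the way, so the selector may need adjusting:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://www.google.com")
driver.find_element(By.NAME, "q").send_keys("selenium python", Keys.RETURN)

# Result titles are usually <h3> tags nested inside an <a>; walk up to the <a> to read its href
first_heading = driver.find_element(By.CSS_SELECTOR, "div#search a h3")
first_link = first_heading.find_element(By.XPATH, "./ancestor::a")
print(first_link.get_attribute("href"))
first_link.click()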

Python Selenium Find Element

I'm searching for a tag inside a class. I tried many methods but I couldn't get the value.
see source code
The data I need is inside the "data-description" attribute.
How can I get the "data-description" value?
I tried some methods but they didn't work:
driver.find_element_by_name("data-description")
driver.find_element_by_css_selector("data-description")
I solved it with this method:
icerisi = browser.find_elements_by_class_name('integratedService ')
for mycode in icerisi:
    hizmetler.append(mycode.get_attribute("data-description"))
Thanks for your help.
I think a CSS selector would work best here. "data-description" isn't an element, it's an attribute of an element. The CSS selector for an element with a given attribute would be:
[attribute]
Or, to be more specific, you could use:
[attribute="attribute value"]
Here's a good tip:
Most web browsers have a way of copying an element's selector or XPath. For example, in Safari, if you view the page source and right-click on an element, it gives you the option to copy it; then select XPath or Selector and use driver.find_element_by_xpath() or driver.find_element_by_css_selector() in your code. I am certain Google Chrome and Firefox have similar options.
This method is not always failsafe, as the copied XPath can be very specific, meaning that slight changes to the website will cause your script to break, but it is a quick and easy solution, and is especially useful if you don't plan on reusing your code months or years later.

Selenium and Python 3 – unable to find element

First off, apologies for a commonly asked question. I've looked through all the earlier examples but none of the answers seem to work in my situation.
I'm trying to locate the username and password fields from this website: http://epaper.bt.com.bn/
I've had no problems locating the "myprofile" element and clicking on it. It then loads a page into an iframe. Here's my problem: I've tried all the various methods like find_element_by_id('input_username'), find_element_by_name('username'), etc., and none of them work. I'd appreciate it if someone could point me down the right path.
Try this first (you need to switch to the iframe):
driver.switch_to.frame("iframe_login")
Then you can find your elements. For example:
driver.find_element_by_id("input_username").send_keys("username")
To move back out of the iframe:
driver.switch_to.default_content()
