Selenium - 'site' object has no attribute 'find_element_by_link_text' - python

I'm trying to write a Python script that clicks a certain link in a table on a webpage. The only way I have to select this particular link is its link text, but Selenium keeps telling me that the method "find_element_by_link_text" doesn't exist, even though it appears not only in the official Selenium docs but also in multiple online Selenium examples. Here's the code snippet:
hac.find_element_by_link_text("View this year's Report Cards").click()
I cross-checked my Selenium installation against the one from the website and they seem to be the same. Was this feature deprecated, or am I just missing something? I'm using Selenium 2.45.0 and Python 2.7.

You need to call the find_element_by_link_text() method on the WebDriver instance (driver); your hac object is not a WebDriver, which is why the attribute lookup fails.
Here is a sample script that opens the Python home page, locates the link to the About page using its link text, and then clicks that link:
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://www.python.org")
driver.implicitly_wait(10)
elem = driver.find_element_by_link_text("About")
driver.implicitly_wait(10)
elem.click()
This page of the Selenium docs gives an overview of all of the find_element methods available, and shows how to call those methods.

If you are using Selenium 4.3 or later, the find_element_by_*() and find_elements_by_*() methods have been removed; use find_element() with a locator instead.
Examples:
from selenium.webdriver.common.by import By
driver.find_element(By.NAME, "element_name")
driver.find_element(By.XPATH, "xpath_here")

Related

python selenium Unable to locate element for Google map term and service

I am trying to automate the search for the short google link via this code:
link = 'https://www.google.com/maps/place/Sport+La+Pava/#41.273359299999996,2.0005245,14z/data=!4m8!1m2!2m1!1sSport+La+Pava!3m4!1s0x12a49d5b3d4b1753:0xeb7e41655fa9ec91!8m2!3d41.273359299999996!4d2.0005245'
import time
from selenium import webdriver
CHROME_DRIVER_PATH = r"D:\chromedriver\chromedriver.exe"  # raw string so backslashes are not treated as escapes
driver = webdriver.Chrome(executable_path=CHROME_DRIVER_PATH)
driver.get(link)
time.sleep(3)
button1 = driver.find_element_by_id("introAgreeButton")
button1.click()
new_https = driver.find_element_by_xpath('/html/body/jsl/div[3]/div[2]/div/div[2]/div/div[3]/div/div/div[1]/div[4]/div[2]/div[1]/input').get_attribute("value")
print(new_https)
the link is a google map link.
The error happens at button1 = driver.find_element_by_id("introAgreeButton"). The button I am trying to reach here is the one that accepts the terms and conditions; I have to accept it, but every time I get a NoSuchElementException.
I have tried different methods: XPath, full XPath, CSS selectors; nothing works.
I use the same code for websites like amazon.com and everything works fine there, so it is not about the location of my webdriver or anything like that. It seems specific to the Google terms-and-conditions dialog.
As @Nick pointed out, this question has already been answered (the code there was in JavaScript). Here is the Python equivalent for those who need it:
driver.switch_to.frame(0)
driver.find_element_by_id("introAgreeButton").click()
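Wrapped up as a helper (a sketch; the frame index and element id come from the answer above and may change if Google alters the consent page):

```python
def accept_google_consent(driver):
    # The consent button lives inside an <iframe>, so locators run
    # against the top-level document cannot see it until we switch in.
    driver.switch_to.frame(0)
    driver.find_element_by_id("introAgreeButton").click()
    # Switch back so later lookups run against the main document again.
    driver.switch_to.default_content()
```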

Unable to locate send button in whatsapp api

I am trying to send a message through WhatsApp Web using Python and Selenium.
Here is my code.
from selenium import webdriver
import time
browser=webdriver.Chrome()
browser.get("""https://api.whatsapp.com/send?phone=************&text=I'm%20interested%20in%20your%20car%20for%20sale""")
time.sleep(5)
send_btn=browser.find_element_by_id("action-button")
send_btn.click()
It is not clicking the send button; the button just blinks. Please help.
Rather than locating the button by its id, I would suggest using a CSS selector (CSS_SELECTOR).
This is the code you can try out :
send_button = driver.find_element_by_css_selector('a.button.button--simple.button--primary')
send_button.click()
UPDATE:
CSS selectors generally perform better than XPath, and this is well documented in the Selenium community. Here are some reasons:
XPath engines are different in each browser, which makes them inconsistent.
IE does not have a native XPath engine, so Selenium injects its own XPath engine for API compatibility; we therefore lose the advantage of the native browser features that WebDriver inherently promotes.
For more, refer to this SO link: XPATH VS CSS_SELECTOR

Setting page load timeout in Selenium Python binding

I am writing a bot in Python with the Selenium module. When I open a webpage with my bot, it takes a long time to load everything, because the page pulls in a lot of external resources beyond the DOM. I used explicit and implicit waits to work around this, since I only need one specific element to load rather than the whole page, but it didn't work. The problem is that if I run the following statement:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get('somewebpage')
elm = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.ID, 'someelementID')))
elm.click()
It doesn't work, because Selenium waits for driver.get() to fully retrieve the webpage before proceeding. Now I want to write code that sets a timeout for driver.get(), like:
driver.get('somewebpage').timeout(5)
where driver.get() stops loading the page after 5 seconds and the program flow proceeds, whether or not driver.get() fully loaded the webpage.
By the way, I have searched about the feature that I said above, and came across there:
Selenium WebDriver go to page without waiting for page load
But the problem is that the answer in the above link does not say anything about the Python equivalent code.
How do I accomplish the feature that I am searching for?
Python equivalent code for the question linked above (Selenium WebDriver go to page without waiting for page load):
from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.set_preference('webdriver.load.strategy', 'unstable')
driver = webdriver.Firefox(profile)
and:
driver.set_page_load_timeout(5)
There are a ton of questions on this. Here is an example that waits until all jQuery AJAX calls have completed, with a 5-second timeout:
from selenium.webdriver.support.ui import WebDriverWait
WebDriverWait(driver, 5).until(lambda s: s.execute_script("return jQuery.active == 0"))
It was a really tedious issue to solve. I just did the following and the problem got resolved:
driver= webdriver.Firefox()
driver.set_page_load_timeout(5)
driver.get('somewebpage')
It worked for me using Firefox driver (and Chrome driver as well).

How to simulate a AJAX call (XHR) with python and mechanize

I am working on a project that does online homework automatically.
I am able to log in, find the exercises, and even fill in the form using mechanize.
I discovered that the submit button triggers a JavaScript function, so I searched for a solution. A lot of answers suggest 'simulating the XHR', but none of them go into the details.
I don't know if this screen cap helps.
http://i.stack.imgur.com/0g83g.png
Thanks
If you want to evaluate JavaScript, I'd recommend using Selenium. It opens a real browser that you can then drive from Python.
First, install Selenium: https://pypi.python.org/pypi/selenium
Then download the chrome driver from here: https://code.google.com/p/chromedriver/downloads/list
Put the binary in the same folder as the python script you're writing. (Or add it to the path or whatever, more information here: https://code.google.com/p/selenium/wiki/ChromeDriver)
Afterwards the following example should work:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
driver = webdriver.Chrome()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")  # the search box
elem.send_keys("selenium")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
More information here
(The example was also from there)
An XHR is the same as a regular HTTP request. Make it the same way and then deal with the response.
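In practice that means reproducing the request you see in the browser's network tab. A sketch using only the standard library (the URL, payload, and headers here are placeholders, not the real ones for the homework site):

```python
import json
import urllib.request

def build_xhr(url, payload):
    # Serialize the form data the same way the page's JavaScript does,
    # and mark the request as an XHR with the usual header.
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(url, data=data, method="POST")
    req.add_header("Content-Type", "application/json")
    req.add_header("X-Requested-With", "XMLHttpRequest")
    return req

# urllib.request.urlopen(build_xhr(url, {"answer": "42"})) would then
# send it; inspect the response body just as the page's script would.
```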

Python - Automating form entry on a .aspx website and storing output in a file (using Selenium?)

I've just started to learn coding this month, beginning with Python. I would like to automate a simple task (my first project): visit a company's career website, retrieve all the jobs posted for the day, and store them in a file. This is what I would like to do, in sequence:
Go to http://www.nov.com/careers/jobsearch.aspx
Select the option - 25 Jobs per page
Select the date option - Today
Click on Search for Jobs
Store results in a file (just the job titles)
I looked around and found that Selenium is the best way to go about handling .aspx pages.
I have done steps 1-4 using Selenium. However, there are two issues:
I do not want the browser opening up. I just need the output saved to a file.
Even if I am OK with the browser popping up, running the Python code (exported from Selenium as WebDriver code) in IDLE (I have Windows) results in errors. When I run the code, the browser opens and the link loads, but none of the form selections happen, and I get the following error message (link below) before the browser closes. What does the error message mean?
http://i.stack.imgur.com/lmcDz.png
Any help/guidance will be appreciated...Thanks!
First, about the error you got: the NoSuchElementException with the message Unable to locate element means the selector you provided is wrong, so the web driver can't find the element.
Since you did not post your code and I can't open the website link you entered, I can only give you a sample with as much detail as I can.
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("url")
number_option = driver.find_element_by_id("id_for_25_option_indicator")
number_option.click()
date_option = driver.find_element_by_id("id_for_today_option_indicator")
date_option.click()
search_button = driver.find_element_by_id("id_for_search_button")
search_button.click()
all_results = driver.find_elements_by_xpath("some_xpath_that_is_common_between_all_job_results")
with open("result_file.txt", "w") as result_file:
    for result in all_results:
        result_file.write(result.text + "\n")
driver.close()
Since you said you just started to learn coding recently, I think I have to give some explanations:
I recommend using driver.find_element_by_id wherever an element has an id attribute; it's more robust.
Instead of result.text, you can use result.get_attribute("value") or result.get_attribute("innerHTML").
That's all that came to my mind for now, but it would be better if you posted your code so we can see what is wrong with it. Additionally, it would be great if you gave me a working link to the website so I can add more detail to the code; your current link is broken.
Concerning the first issue, you can simply use a headless browser. This is possible with Chrome as well as Firefox.
Check Grey Li's answer here for example: Python - Firefox Headless
from selenium import webdriver
options = webdriver.FirefoxOptions()
options.add_argument('-headless')
driver = webdriver.Firefox(options=options)
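Combining the two answers, the result-saving step (step 5) works exactly the same in headless mode. A small helper sketch (the names are illustrative):

```python
def save_job_titles(elements, path):
    # `elements` is whatever find_elements_* returned; write one
    # job title per line.
    with open(path, "w") as f:
        for element in elements:
            f.write(element.text + "\n")
```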
