Python - Selenium XPath

I'm wondering why the code below works sometimes and sometimes doesn't. My IDE gives me this error:
Message: no such element: Unable to locate element:
{"method":"xpath","selector":"/html/body/div[4]/div/div/div[2]"}
(Session info: chrome=90.0.4430.93)
def find_followers(self):
    self.driver.get(URL + ACCOUNT)
    follow = self.driver.find_element_by_xpath('/html/body/div[1]/section/main/div/header/section/ul/li[3]/a')
    follow.click()
    time.sleep(10)
    modal = self.driver.find_element_by_xpath('/html/body/div[6]/div/div/div[2]')
    for i in range(10):
        self.driver.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', modal)
        time.sleep(13)
I'm trying to make a script which goes to Instagram and opens an Instagram account's followers list. The script runs fine until this error. I've checked the XPath and it is definitely right. I ran the script for a few days and it was working, but now when I try again it doesn't. I'm new to Python and want to learn why this happens and how to solve it.

You need to close the cookie popup and log in to Instagram in order to see the followers list. I'd recommend creating a second function called "initiate_instagram" that does that.
You could also log in manually, because of two-factor authentication.
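A minimal sketch of such a helper, assuming hypothetical locators (the cookie-button text, the username/password field names, and the submit-button XPath are all placeholders you would need to verify against the live page, since Instagram changes its markup often):

```python
import time

def initiate_instagram(driver, username, password, wait=5):
    # All locators below are placeholders; check them against the live
    # page before relying on them.
    driver.get('https://www.instagram.com/')
    time.sleep(wait)
    # Dismiss the cookie popup (the button text depends on your locale).
    driver.find_element_by_xpath('//button[text()="Allow all cookies"]').click()
    time.sleep(wait)
    # Fill in the login form and submit it.
    driver.find_element_by_name('username').send_keys(username)
    driver.find_element_by_name('password').send_keys(password)
    driver.find_element_by_xpath('//button[@type="submit"]').click()
    time.sleep(wait)
```

You would call this once, before find_followers, so the followers link is actually visible when the script looks for it.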

Related

Selenium driver: looking for a solution for locating and clicking a button when I open a Google page

Python Selenium driver:
[Screenshot: Google sign-in prompt with a "No thanks" button]
When I open a Google page, there is a small window asking whether I want to sign in or not. I want to click on the "No thanks" button, which is shown above.
I have tried these methods so far, but I keep getting errors. None of the following works.
#self.driver.find_element(By.CSS_SELECTOR, 'button.M6CB1c')
#button=self.driver.find_elements(By.XPATH, '//button')
#abc=self.driver.find_elements(By.NAME, 'ZUkOIc').click()
#self.driver.find_element(By.TAG_NAME, 'button').click()
Error message for the first line of code:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".M6CB1c.rr4y5c"}
selenium.common.exceptions.NoSuchElementException is raised when the element is not in the page at the current time.
TL;DR: what you're looking for is an explicit wait in Selenium. You need to use WebDriverWait with the expected condition element_to_be_clickable.
When we load a page, modern sites tend to run JavaScript that can manipulate the DOM (the HTML page objects). The proper way to handle this is to wait for the page, or for the required element, to load, and only then try to locate it.
The Selenium waits documentation explains this very well with an example.
You should try this:
driver.find_element(By.XPATH, '//button[@id="W0wltc"]')

How to click the Continue button on website using selenium in python?

I am trying to write code that can auto-apply to job openings on indeed.com. I have managed to reach the last stage; however, the final click on the application form is giving me a lot of trouble. Please refer to the page below.
Once logged in to my profile, I go to the relevant search page, click on the listing I am interested in, and then on the final page (shown above) I try to click on the Continue button using XPath as follows:
driver.get("https://in.indeed.com/jobs?q=data%20analyst&l=Delhi&vjk=5c0bd416675cf4e5")
driver.find_element_by_xpath('//*[@id="apply-button-container"]/div[1]/span[1]').click()
driver.find_element_by_xpath('//*[@id="form-action-continue"]')
However, this gives me an error:
Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="form-action-continue"]"}
Having gone through some suggestions on the net I have even tried the following:
driver.get("https://in.indeed.com/jobs?q=data%20analyst&l=Delhi&vjk=5c0bd416675cf4e5")
driver.find_element_by_xpath('//*[@id="apply-button-container"]/div[1]/span[1]').click()
WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//*[#id="form-action-continue"]')))
But then this gives me a timeout error:
TimeoutException: Message:
I would appreciate some help with this.
From what it seems, there are multiple iframes on that form, hence your errors.
You need to get the first iframe, switch to it, get the second iframe inside the first one, switch to it, and only then will you be able to get the continue button.
Something like this should do the trick:
frame_1 = driver.find_element_by_css_selector('iframe[title="Job application form container"]')
driver.switch_to.frame(frame_1)
frame_2 = driver.find_element_by_css_selector('iframe[title="Job application form"]')
driver.switch_to.frame(frame_2)
continue_btn = driver.find_element_by_css_selector('#form-action-continue')
continue_btn.click()
Once I had a similar issue, and a plain time.sleep() until the form (or continue button) appeared helped me. Try it instead of WebDriverWait; maybe it will work.

Selenium with Python 3.5 and Chrome - failed to click element located via xpath

I have a script which gets me info from Polish Avon's website. So essentially every month they change prices, and to make my girl's life easier I just download the prices to have a look up table in excel.
Anyways, so I have the script which navigates to this website:
https://www.avon.pl/szukaj/po-kodzie-produktu/
Once the page is loaded it enters a number between 00000 and 99999 into the search box, which I find using xpath:
find_box_path = '//*[@id="ShopByProductNumber"]/div[2]/div[3]/div[1]/div/input'
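As an aside, the 00000 to 99999 candidates have to be generated as zero-padded strings so leading zeros survive; a small helper for illustration (not taken from the original script):

```python
def product_codes(start=0, end=99999):
    # Yield 5-digit, zero-padded code strings: "00000", "00001", ..., "99999".
    for n in range(start, end + 1):
        yield f"{n:05d}"
```

For example, list(product_codes(0, 2)) yields ['00000', '00001', '00002'], which is what the search box expects rather than the bare integers 0, 1, 2.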
Only some of the codes are valid, so if the search is successful, the script clicks on the item, which opens in a new window, and processes the information; if nothing is found, it moves on to the next number. The script checks for the XPath to figure out whether the code is valid or not. The following extract would click on the element and open a new tab:
# ------------ click the product ------------
find_item_text_element = WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_xpath(find_text_path))
find_item_text_element.click()
time.sleep(0.4)
The find_text_path variable is declared earlier as:
find_text_path = '//*[@id="ShopByProductNumber"]/div[2]/div[3]/p/a'
The interesting bit is that for more than a year, my script worked like a charm. Then, two days ago, the script was running and got through maybe 25000 combinations until it stopped. From that point, when the script gets to the bit above, it shuts down and resets. I understand why it resets - that is intended - but I have no idea why it won't click on the element. The IDE doesn't show any error.
I use XPather to find the XPaths, and the one above is a valid XPath. And as I said, it worked fine until now. I understand that the website itself could have done something to prevent automation, but I don't see the problem. Can anyone see/point out the problem? Maybe some workarounds?
[Screenshot: location of both elements in question]
EDIT:
The issue was resolved. The zoom in the Chrome profile I was using was set to 105% instead of 100%. This caused the webdriver to click the wrong spot on the page.
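For anyone hitting the same symptom: a click executed through JavaScript sidesteps coordinate calculation entirely, so the zoom level cannot make it land on the wrong element. A minimal sketch, assuming driver is a regular WebDriver instance:

```python
def js_click(driver, element):
    # A DOM-level click: the browser does not compute click coordinates,
    # so viewport zoom cannot make it hit the wrong spot on the page.
    driver.execute_script("arguments[0].click();", element)
```

This is a workaround rather than a fix; resetting the profile zoom to 100% is still the cleaner solution.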
Can you show us the specific HTML code which is causing the issue? I'm not able to find it on the website. Maybe the element you're trying to reach is now inside a frame, but I can only guess without the HTML.

Python - Automating form entry on a .aspx website and storing output in a file (using Selenium?)

I've just started to learn coding this month and started with Python. I would like to automate a simple task (my first project) - visit a company's career website, retrieve all the jobs posted for the day and store them in a file. So this is what I would like to do, in sequence:
Go to http://www.nov.com/careers/jobsearch.aspx
Select the option - 25 Jobs per page
Select the date option - Today
Click on Search for Jobs
Store results in a file (just the job titles)
I looked around and found that Selenium is the best way to go about handling .aspx pages.
I have done steps 1-4 using Selenium. However, there are two issues:
I do not want the browser opening up. I just need the output saved to a file.
Even if I am OK with the browser popping up, running the Python code (exported from Selenium as WebDriver) in IDLE (I have Windows) results in errors. When I run the Python code, the browser opens up and the link is loaded, but none of the form selections happen, and I get the following error message (link below) before the browser closes. What does the error message mean?
http://i.stack.imgur.com/lmcDz.png
Any help/guidance will be appreciated...Thanks!
First, about the error you got: the exception NoSuchElementException and the message Unable to locate element mean that the selector you provided is wrong and the web driver can't find the element.
Well, since you did not post your code and I can't open the link of the website you entered, I can just give you a sample, and I will include as many details as I can.
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("url")
number_option = driver.find_element_by_id("id_for_25_option_indicator")
number_option.click()
date_option = driver.find_element_by_id("id_for_today_option_indicator")
date_option.click()
search_button = driver.find_element_by_id("id_for_search_button")
search_button.click()
all_results = driver.find_elements_by_xpath("some_xpath_that_is_common_between_all_job_results")
result_file = open("result_file.txt", "w")
for result in all_results:
    result_file.write(result.text + "\n")
driver.close()
result_file.close()
Since you said you just started to learn coding recently, I should give some explanations:
I recommend using driver.find_element_by_id in all cases where elements have an ID property; it's more robust.
Instead of result.text, you can use result.get_attribute("value") or result.get_attribute("innerHTML").
That's all that came to mind for now, but it would be better if you posted your code so we can see what is wrong with it. Additionally, it would be great if you gave me a new link to the website, so I can add more details to the code; your current link is broken.
Concerning the first issue, you can simply use a headless browser. This is possible with Chrome as well as Firefox.
Check Grey Li's answer here for example: Python - Firefox Headless
from selenium import webdriver
options = webdriver.FirefoxOptions()
options.add_argument('-headless')
driver = webdriver.Firefox(options=options)

selenium with python web crawler

I want to screen-scrape a web site that has multiple pages. These pages are loaded dynamically without changing the URL, hence I'm using Selenium to scrape it. But I'm getting an exception from this simple program.
import re
from contextlib import closing
from selenium.webdriver import Firefox

url = "http://www.samsung.com/in/consumer/mobile-phone/mobile-phone/smartphone/"
with closing(Firefox()) as browser:
    browser.get(url)  # the page must be loaded before looking for the link
    n = 2
    link = browser.find_element_by_link_text(str(n))
    link.click()
    #web_page = browser.page_source
    #print type(web_page)
The error is as follows:
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: u'Unable to locate element: {"method":"link text","selector":"2"}' ; Stacktrace: Method FirefoxDriver.prototype.findElementInternal_ threw an error in file:///tmp/tmpMJeeTr/extensions/fxdriver#googlecode.com/components/driver_component.js
Is the problem with the URL given or with the Firefox browser?
It would be a great help if someone could assist me.
I think your main issue is that the page itself takes a while to load, and you are immediately trying to access that link (which likely hasn't rendered yet, hence the stack trace). One thing you can try is using an implicit wait with your browser, which tells the browser to wait up to a certain period of time for elements to appear before timing out. In your case, you could try the following, which waits for up to 10 seconds while polling the DOM for a particular item (in this case, the link with text 2):
browser.implicitly_wait(10)
n = 2
link = browser.find_element_by_link_text(str(n))
link.click()
#web_page=browser.page_source
#print type(web_page)
I'm developing a Python module which might cover your (or another reader's) use case:
https://github.com/cmwslw/selenium-crawler
It converts recorded Selenium scripts to crawling functions, which avoids writing any of the above code by hand. It works great with pages that load content dynamically. I hope someone finds this useful.
