Cannot locate element within a web page using Selenium Python

I just want to write a simple login script for one website. However, I think the login page was written in JS, and it's really hard to locate the elements with Selenium.
The web page I am going to play with is:
"https://www.nike.com/snkrs/login?returnUrl=%2F"
This is how the page looks and how the inspect-element panel looks:
I was trying to locate the element with the following code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Firefox()
driver.get("https://www.nike.com/snkrs/login?returnUrl=%2Fthread%2Fe9680e08e7e3cd76b8832684037a58a369cad5ed")
time.sleep(5)
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
elem = driver.find_element_by_xpath("//*[@id='ce3feab5-6156-441a-970e-23544473a623']")
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
driver.close()
This code returns an error saying that no element could be found by [@id='ce3feab5-6156-441a-970e-23544473a623'].
I tried playing with frames, but that does not seem to work. If I go to the "view page source" page, it is full of JS code.
Is there a good way to work with such a web page using Selenium?

Try changing the code :
elem = driver.find_element_by_xpath("//*[@id='ce3feab5-6156-441a-970e-23544473a623']")
to
elem = driver.find_element_by_xpath("//*[@type='email']")

My guess (and observation) is that the id changes each time you visit the page. The id looks auto-generated, and when I go to the page multiple times, the id is different each time.
You'll need to search for something that doesn't change. For example, you can search for the name attribute, which has the seemingly static value "emailAddress"
element = driver.find_element_by_name("emailAddress")
You could also use an xpath expression to search for other attributes, such as data-componentname:
element = driver.find_element_by_xpath("//input[@data-componentname='emailAddress']")
Also, instead of a hard-coded sleep, you can simply wait for the element to be visible:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get("https://www.nike.com/snkrs/login")
element = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.NAME, "emailAddress"))
)
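Putting the two ideas together (a stable locator plus an explicit wait), here is a minimal sketch of the original goal of typing into the email field and pressing RETURN; the "emailAddress" name is the value observed on the page and may change if Nike reworks the form:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://www.nike.com/snkrs/login")
# wait for the email field instead of a hard-coded sleep
email = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.NAME, "emailAddress"))
)
email.send_keys("pycon")
email.send_keys(Keys.RETURN)
driver.quit()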

Related

How do I use selenium ChromeDriver to scroll the sidebar on Google maps to load more results?

I’ve run into a problem trying to use Selenium ChromeDriver to scroll down the sidebar of a Google Maps results page. I am trying to get to the 6th result down, but that result does not fully load until you scroll down. Using the find_element_by_xpath method, I am successfully able to access results 1-5 and click into them individually, but when trying to use the actions.move_to_element(link).perform() method to scroll to the 6th element, it does not work and throws an error message.
The error that I get is:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:
However, I know this element exists because when I manually scroll and more results are loaded, the XPath works correctly. What am I doing wrong? I’ve spent many hours trying to solve this and haven’t been able to solve it with the available content out there. I appreciate any help or insights you can offer, thank you!
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as soup
import time
PATH = r"C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://www.google.com/maps")
time.sleep(7)
page = soup(driver.page_source, 'html.parser')
#find the searchbar, enter search, and hit return
search = driver.find_element_by_id('searchboxinput')
search.send_keys("dentists in Austin Texas")
search.send_keys(Keys.RETURN)
driver.maximize_window()
time.sleep(7)
#I want to get the 6th result down but it requires a sidebar scroll to load
link = driver.find_element_by_xpath("//*[@id='pane']/div/div[1]/div/div/div[4]/div[1]/div[13]/div/a")
ActionChains(driver).move_to_element(link).perform()
link.click()
time.sleep(5)
driver.back()
I found a solution that works: target the element by XPath from the JavaScript interface of Selenium. You then execute two commands in one instruction (locate the element, then scroll it):
driver.execute_script("var el = document.evaluate('/html/body/jsl/div[3]/div[10]/div[8]/div/div[1]/div/div/div[4]/div[1]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue; el.scroll(0, 5000);")
This is the only solution that worked for me.
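The same idea can also be written by locating the container with Selenium and passing it into execute_script, instead of using document.evaluate inside the JavaScript (a sketch, assuming the results pane can be located by its aria-label and is the scrollable element):
# locate the scrollable results pane with Selenium, then scroll it via JS
pane = driver.find_element_by_xpath('//div[contains(@aria-label, "dentists in Austin Texas")]')
driver.execute_script("arguments[0].scrollTop = arguments[0].scrollHeight;", pane)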
The search results in the Google map are located with the //div[contains(@aria-label,'dentists in Austin Texas')]//div[contains(@jsaction,'mouseover')] XPath.
So, to select the 6th element there you can do the following:
from selenium.webdriver.common.action_chains import ActionChains
results = driver.find_elements_by_xpath('//div[contains(@aria-label,"dentists in Austin Texas")]//div[contains(@jsaction,"mouseover")]')
ActionChains(driver).move_to_element(results[5]).click().perform()  # index 5 is the 6th result (0-based)
I was just implementing scrolling on the Google Maps sidebar, and it is working on my side. Please check this code:
# selecting scroll body
driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]').click()
#start scrolling your sidebar
html = driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]')
html.send_keys(Keys.END)
Also add the Keys import:
from selenium.webdriver.common.keys import Keys
I hope this helps.
By the way, I have implemented scraping of Google Maps with its available data and used the above code to scroll. Let me know if you have any problems.
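A rough sketch of that idea in a loop, assuming the locators from the answers above still match and that pressing END on the results feed keeps loading more entries:
import time
from selenium.webdriver.common.keys import Keys

results_xpath = ('//div[contains(@aria-label, "dentists in Austin Texas")]'
                 '//div[contains(@jsaction, "mouseover")]')
# click the results feed so it receives keyboard input, then press END
# until at least six results have been loaded
feed = driver.find_element_by_xpath('//div[contains(@aria-label, "dentists in Austin Texas")]')
feed.click()
while len(driver.find_elements_by_xpath(results_xpath)) < 6:
    feed.send_keys(Keys.END)
    time.sleep(2)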

How can I get this SPECIFIC element in selenium in python

UPDATE:
So thanks to the top-voted answer it displayed some information, but not the right information: it shows 0kb out of 100. In the inspect-element console, if I do console.log($0), the item is displayed in the console. How do I fetch that element?
I want to create a Python 3.x programme that gets my stats off of Netlify and Easybase using Selenium. The issue I have already come across is that the element does not have a specific class name, and the text widget isn't just a plain tag. Here is a screenshot of the Netlify HTML, and this is the code that I used:
element = driver.find_element_by_name("github")
element.click()
login = driver.find_element_by_name("login")
login.send_keys(email)
password = driver.find_element_by_name("password")
password.send_keys(passwordstr)
loginbtn = driver.find_element_by_name("commit")
loginbtn.click()
getbandwidth = driver.find_element_by_xpath('//*[@id="main"]/div/div[1]/div/section/div/div/div/dl/div/dd')
print(getbandwidth.text)
getbandwidth = driver.find_element_by_xpath("//dd[@class='tw-text-xl tw-mt-4px tw-leading-none']")
You can use this to grab the first element with that class. The expression below does the same, but lets you index other elements with similar classes:
(//dd[@class='tw-text-xl tw-mt-4px tw-leading-none'])[1]
Normally we use webdriver waits to allow for the element to become visible.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
getbandwidth = WebDriverWait(driver, 10).until(EC.visibility_of_element_located((By.XPATH, "//dd[@class='tw-text-xl tw-mt-4px tw-leading-none']")))
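The wait returns the located element itself, so you can read its text exactly as in the question:
# once the wait succeeds, .text is available directly
print(getbandwidth.text)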

Send keys to HTML element using Selenium

I am writing an automation script for sports betting in Python using Selenium. I am stuck at a point where Selenium is unable to click or send keys to the specific HTML element highlighted in the following screenshot (https://i.stack.imgur.com/NbljY.png).
Here is what I have tried:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("http://www.bet365.com")
### Some code here to navigate to a particular match
driver.switch_to.frame(driver.find_element_by_tag_name("iframe"))
elem = driver.find_element_by_class_name("bs-Stake")
elem.click()
elem.send_keys("100")
This returns the following error:
ElementNotInteractableException: Element <div class="bs-Stake"> is not reachable by keyboard
If I try instead
elem = driver.find_element_by_class_name("stk bs-Stake_TextBox")
I get the error:
NoSuchElementException: Unable to locate element: .stk bs-Stake_TextBox
I would appreciate help navigating to the HTML element, clicking and sending keys to it, using any method available in Selenium.
Try sending the keys with a WebDriverWait, so the element can be interacted with:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
element = WebDriverWait(driver, 50).until(EC.visibility_of_element_located((By.CSS_SELECTOR, ".stk.bs-Stake_TextBox")))
element.send_keys("100")
Although CSS styles are inherited, the class designations are not. Your CSS selector has multiple separate classes in it, but your element sample only has one.
Selenium is designed to mimic the same actions that a person can perform. The error suggests that the field is not available for interaction; find out why.
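If the div with class bs-Stake is only a styled wrapper, one possible sketch (assuming the editable field is an input nested inside that wrapper, which I cannot verify without the live page) is to target that child instead:
# assumption: the editable field is an <input> nested inside the
# div.bs-Stake wrapper; adjust the selector to match the real markup
stake_input = driver.find_element_by_css_selector("div.bs-Stake input")
stake_input.click()
stake_input.send_keys("100")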

Access nested elements in HTML using Python Selenium

I am trying to automate logging into a website (http://www.phptravels.net/) using Selenium - Python on Chrome. This is an open website used for automation tutorials.
I am trying to click an element to open a drop-down (My Account button at the top navbar) which will then give me the option to login and redirect me to the login page.
The HTML is nested with many div and ul/li tags.
I have tried various lines of code but haven't been able to make much progress.
driver.find_element_by_id('li_myaccount').click()
driver.find_element_by_link_text(' Login').click()
driver.find_element_by_xpath("//*[#id='li_myaccount']/ul/li[1]/a").click()
These are some of the examples that I tried out. All of them failed with the error "element not visible".
How do I find those elements? Even the xpath function is throwing this error.
I have not added any time wait in my code.
Any ideas how to proceed further?
Hope this code will help:
from selenium import webdriver
from selenium.webdriver.common.by import By
url="http://www.phptravels.net/"
d=webdriver.Chrome()
d.maximize_window()
d.get(url)
d.find_element(By.LINK_TEXT,'MY ACCOUNT').click()
d.find_element(By.LINK_TEXT,'Login').click()
d.find_element(By.NAME,"username").send_keys("Test")
d.find_element(By.NAME,"password").send_keys("Test")
d.find_element(By.XPATH,"//button[text()='Login']").click()
Use the best available locator on your HTML page, so that you do not need to create XPath or CSS selectors for simple operations.
You may be having issues with the page not being loaded when you try and find the element of interest. You should use the WebDriverWait class to wait until a given element is present in the page.
Adapted from the docs:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
# Set up your driver here....
try:
    element = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, 'li_myaccount'))
    )
    element.click()
except:
    # Handle any exceptions here
    pass
finally:
    driver.quit()

Python with selenium webscraping unable to find elements

I am trying to write a web scraper in Python that will activate the "onclick" functionality of certain buttons on a webpage, because the tables with the data I want are converted to CSV, which makes them much easier to access. The problem is that I am unable to locate elements by XPath at all when using PhantomJS. How can I click the element and access the CSV content that I want?
This is my code:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.common.proxy import *
url = "http://www.pro-football-reference.com/boxscores/201609180nwe.htm"
xpath = "//*[@id='all_player_offense']/div[1]/div/ul/li[1]/div/ul/li[3]/button"
path_to_phantomjs = 'browser/phantomjs'
browser = webdriver.PhantomJS(executable_path = path_to_phantomjs)
browser.get(url)
delay=3
element_present = EC.presence_of_element_located((By.ID, 'all_player_offense'))
WebDriverWait(browser, delay).until(element_present)
browser.find_element_by_xpath(xpath).click()
And I get this error:
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with xpath '//*[@id='all_player_offense']/div[1]/div/ul/li[1]/div/ul/li[3]/button'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Connection":"close","Content-Length":"153","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:50989","User-Agent":"Python-urllib/2.7"},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"xpath\", \"sessionId\": \"93ff24f0-9cbe-11e6-8711-bdfa3ff9cfb1\", \"value\": \"//*[@id='all_player_offense']/div[1]/div/ul/li[1]/div/ul/li[3]/button\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/93ff24f0-9cbe-11e6-8711-bdfa3ff9cfb1/element"}}
Screenshot: available via screen
IMPORTANT THING I FORGOT TO MENTION: As described in this issue on GitHub, try putting set_window_size(width, height) or maximize_window() after setting up the webdriver. You should also consider telling the webdriver to implicitly_wait(10) for the element to appear.
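For example, something along these lines (the window size and timeout values are arbitrary):
browser = webdriver.PhantomJS(executable_path=path_to_phantomjs)
# give the headless browser a realistic viewport and an implicit wait
browser.set_window_size(1366, 768)   # or browser.maximize_window()
browser.implicitly_wait(10)
browser.get(url)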
So there's a special maneuver you have to perform in order for the Selenium Webdriver to properly emulate what you're doing. In essence, to get the desired data, you'd have to:
A: Hover over the "Share & More" dropdown menu. Then
B: Click "Get table as CSV (Excel)".
For A, this involves having to place the emulated cursor on the element without clicking it. This idea of "mouse over" can be done with the move_to_element() function provided in the ActionChains class. So at the top you'd insert this:
from selenium.webdriver.common.action_chains import ActionChains
You want Selenium to find the specific element and move to it. You can achieve this with 2 lines of code:
dropdown = browser.find_element_by_xpath('//*[@id="all_player_offense"]/div[1]/div/ul/li[1]')
ActionChains(browser).move_to_element(dropdown).perform()
If you omit the above, you'll get an ElementNotVisibleException.
Now for B, you should be able to do browser.find_element_by_xpath(xpath).click().
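Putting A and B together, the combined flow from this answer would look roughly like this:
from selenium.webdriver.common.action_chains import ActionChains

# hover over the "Share & more" dropdown, then click the CSV button
dropdown = browser.find_element_by_xpath('//*[@id="all_player_offense"]/div[1]/div/ul/li[1]')
ActionChains(browser).move_to_element(dropdown).perform()
browser.find_element_by_xpath(xpath).click()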
