Python Selenium Not Able to Grab XPATH

Recently the login for this site changed and no longer recognizes my Python bot. Specifically, the issue appears to be on the login page, where the script is unable to select the username input textbox. Its id is 'loginId' and the correct XPath appears to be "//*[@name='loginId']".
The line I am attempting to use (that used to work) is:
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//*[@name='loginId']"))).send_keys(username)
The error message I am receiving clearly states it is not finding the element and is timing out:
File "C:\Users\Matt\Python3.9\lib\site-packages\selenium\webdriver\support\wait.py", line 80, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
I have tried all of the suggested names/paths/ids found in both Katalon Recorder and Selenium IDE. The element does not appear to be inside an iframe. Not sure what is going on.
Any thoughts or input would be helpful here. A link is provided above; if you can check it out, that would be helpful. Thank you in advance!
Supplemental code [EDIT 07/05/2022]:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.chrome.options import Options
options = webdriver.ChromeOptions()
driver = webdriver.Chrome('C:\\Users\\Matt\\Documents\\Splitt\\Chromedriver\\chromedriver.exe', options=options)
driver.get("https://www.stellarmls.com/")
driver.maximize_window()
WebDriverWait(driver, 6).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="login-form-row"]/form/div/div[3]/div/a'))).click()
driver.find_element(By.ID, "loginId").send_keys('username')

I didn't use your code and instead plugged in your site to a Selenium script I had and it worked without issue:
driver.find_element(By.NAME, "loginId").send_keys(username)
It does essentially the same thing, just without the explicit wait; the missing wait didn't seem to be a problem.
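For completeness, a minimal sketch of how the full flow might look, reusing the login-link XPath and the loginId locator from the question; everything else (driver setup, the credential) is a placeholder, not the asker's actual configuration:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # Selenium 4 can resolve chromedriver via PATH / Selenium Manager
driver.get("https://www.stellarmls.com/")
driver.maximize_window()

wait = WebDriverWait(driver, 10)
# Click through to the login form (XPath taken from the question's supplemental code).
wait.until(EC.element_to_be_clickable(
    (By.XPATH, '//*[@id="login-form-row"]/form/div/div[3]/div/a'))).click()
# Then wait for the username field itself before typing into it.
wait.until(EC.element_to_be_clickable(
    (By.NAME, "loginId"))).send_keys("your-username")  # placeholder credential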

Related

NoSuchElementException: Message: no such element: Unable to locate element: Error trying to scrape data from dynamic website using selenium

I've been working on the following code, in which I try to scrape data from a dynamic website (a constantly changing Heroku app that displays data taken from a sensor).
While looking for information I tried to use Selenium, but it doesn't seem to work for me. The following part of the code works well:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver', chrome_options=chrome_options)
driver.get("https://lambda-app-eia.herokuapp.com/")
Now the error appears right here:
element = driver.find_element(By.CLASS_NAME, "MuiTypography-root MuiTypography-h4 css-2voflx")
I don't really have experience working with these types of libraries, so any help would be highly appreciated.
I am using Google Colab, by the way.
You are using the class name MuiTypography-root MuiTypography-h4 css-2voflx, which contains spaces; however, Selenium does not allow spaces in a By.CLASS_NAME locator. You can convert it to a CSS selector like below:
element = driver.find_element(By.CSS_SELECTOR, ".MuiTypography-root.MuiTypography-h4.css-2voflx")
Always check in the dev tool whether the locator is unique or not.
Hope this helps.
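If the values on that page are injected by JavaScript after load, an explicit wait is usually safer than calling find_element right after get(); a minimal sketch along those lines, using the same selector as above and the headless options from the question (the 20-second timeout is an arbitrary choice):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=chrome_options)

driver.get("https://lambda-app-eia.herokuapp.com/")
# Give the JavaScript front end up to 20 seconds to render the element.
element = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located(
        (By.CSS_SELECTOR, ".MuiTypography-root.MuiTypography-h4.css-2voflx")))
print(element.text)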

Find xpath or something similar (=identifier) on web page

I am trying to click on a specific place on a video. I tried it with XPath already, but without success.
For example, on this TikTok video: https://www.tiktok.com/@willsmith/video/7125844820328926510?is_from_webapp=v1&item_id=7125844820328926510&web_id=7139992072584676869
I'm trying to click on the heart with selenium (python).
That's my code:
if driver.find_element_by_xpath("/html/body/div[2]/div[2]/div[2]/div[1]/div[3]/div[1]/div[1]/div[3]/button[1]/span/div/svg/g/path") :
driver.find_element_by_xpath("/html/body/div[2]/div[2]/div[2]/div[1]/div[3]/div[1]/div[1]/div[3]/button[1]/span/div/svg/g/path").click()
It says "Unable to locate element", and I don't know why. I even added some sleep to the code because I thought the website hadn't fully loaded, and I also tried a different XPath.
I also tried to do it with the ID of the "heart-location", but the ID is very hard to make sense of when I inspect the element.
Could someone please help me out? Thanks in advance!
You need to use the correct locator and to wait for the element to be clickable.
For the latter, a WebDriverWait expected_conditions explicit wait should be used.
The code below works (in case you are already logged in):
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(service=webdriver_service, options=options)
url = "https://www.tiktok.com/@willsmith/video/7125844820328926510?is_from_webapp=v1&item_id=7125844820328926510&web_id=7139992072584676869"
driver.get(url)
wait = WebDriverWait(driver, 10)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "span[data-e2e='like-icon']"))).click()
In case you want to use XPath instead of a CSS selector, just replace the line above with:
wait.until(EC.element_to_be_clickable((By.XPATH, "//span[@data-e2e='like-icon']"))).click()
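A side note on the if driver.find_element(...) pattern in the question: find_element raises NoSuchElementException when nothing matches, so it cannot be used as a truth test. If a non-throwing existence check is wanted, find_elements (plural) returns an empty list instead; a small sketch using the data-e2e locator from the answer:
# find_elements returns [] when nothing matches, so it is safe to test.
hearts = driver.find_elements(By.CSS_SELECTOR, "span[data-e2e='like-icon']")
if hearts:
    hearts[0].click()
else:
    print("Like button not found")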

selenium find_element_by_css_selector for the long class name

I have tried multiple times with other instruction code (containing spaces) from the tutorial, which worked fine. However, when I just changed the URL and the class below, it gives the following error:
selenium.common.exceptions.InvalidSelectorException: Message: invalid selector: An invalid or illegal selector was specified
(Session info: chrome=100.0.4896.88)
Everything worked when I used the tutorial code.
Here is my code (I have already solved a few ChromeDriver problems with help from the internet):
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
options = webdriver.ChromeOptions()
options.add_experimental_option("excludeSwitches", ["enable-logging"])
driver = webdriver.Chrome(options=options)
driver.get("https://raritysniper.com/nft-drops-calendar")
time.sleep(1)
link = driver.find_element_by_css_selector(".w-full.h-full.align-middle.object-cover.dark:brightness-80.dark:contrast-103.svelte-f3nlpp").get_attribute("alt")
print(link)
I am trying to get the attributes of each project and write them into a CSV.
(Please refer to the screenshot of the HTML that I am trying to extract.)
It would be wonderful if anyone could pinpoint the problem with my code.
Thank you!
The CSS_SELECTOR that you are using
.w-full.h-full.align-middle.object-cover.dark:brightness-80.dark:contrast-103.svelte-f3nlpp
does not really match any element in the HTML.
Instead, you should use this CSS_SELECTOR:
div.w-full.h-full.align-middle img:not(.placeholder)
In code:
driver.maximize_window()
wait = WebDriverWait(driver, 30)
driver.get("https://raritysniper.com/nft-drops-calendar")
#time.sleep(1)
first_element = wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "div.w-full.h-full.align-middle img:not(.placeholder)")))
print(first_element.get_attribute('alt'))
print(first_element.get_attribute('src'))
Import:
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
Output:
NEON PLEXUS
https://media.raritysniper.com/featured/neon-plexus_1648840196045_3.webp
Process finished with exit code 0
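Since the goal in the question was to collect each project's attributes into a CSV, the single-element wait above can be extended to all matching images. A rough sketch, reusing the driver/wait/By/EC from the answer and assuming the alt and src attributes are what should end up in the file (the nft_drops.csv name is just a placeholder):
import csv

selector = "div.w-full.h-full.align-middle img:not(.placeholder)"
# Wait for at least one card image, then collect every non-placeholder image.
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, selector)))
images = driver.find_elements(By.CSS_SELECTOR, selector)

with open("nft_drops.csv", "w", newline="", encoding="utf-8") as f:  # placeholder file name
    writer = csv.writer(f)
    writer.writerow(["name", "image_url"])
    for img in images:
        writer.writerow([img.get_attribute("alt"), img.get_attribute("src")])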

Why am I getting the message that the search bar is not interactable?

The code is supposed to type "fish" into the YouTube search bar using Selenium and a Chrome Browser.
I have tried the XPaths of multiple divs that hold the tag, and they didn't work either (not sure if the error was the same, though). The XPath in the code is for the <input> tag, so it should be fine.
I also watched a tutorial and the XPath was exactly the same, so that shouldn't be the problem, since it worked for the YouTuber.
It also took me some time to figure out that the find_element_by_* functions are deprecated.
Could it be that .send_keys has also been changed? I tried to find the Selenium 4.1.0 changes, and the website I found said nothing about it.
Should I maybe delete Selenium 4.1.0 and install an older version, for simplicity's sake? There are probably more tutorials for older versions.
from selenium import webdriver
from selenium.webdriver.common.by import By
setting = webdriver.ChromeOptions()
setting.add_argument("--incognito")
# I open the browser in incognito just so I don't clutter my search
# history with dumb stuff as I'm testing things out
# could it be a part of the problem?
driver = webdriver.Chrome(options = setting)
driver.get('http://youtube.com')
searchbox = driver.find_element(By.XPATH, '//*[#id="search"]')
searchbox.send_keys('fish')
Error Message:
selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable
wait=WebDriverWait(driver,60)
driver.get('http://youtube.com')
searchbox = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,"input#search")))
searchbox.send_keys('fish')
In order to send keys to that element, wait for it to be interactable and then send the keys.
Imports:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
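Put together, a self-contained version might look like this; the incognito option comes from the question, input#search is the locator from the answer, and the final Keys.ENTER to actually run the search is an addition:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

setting = webdriver.ChromeOptions()
setting.add_argument("--incognito")
driver = webdriver.Chrome(options=setting)

driver.get("https://www.youtube.com")
wait = WebDriverWait(driver, 60)
# Wait until the search box is clickable before typing into it.
searchbox = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "input#search")))
searchbox.send_keys("fish")
searchbox.send_keys(Keys.ENTER)  # submit the search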

Unable to log in to Target.com using Selenium in Chrome WebDriver

I am trying to log on to Target's website using Selenium in Python with the Chrome WebDriver.
When I am prompted to log in, I use the following code:
self.browser.find_element_by_name("password").send_keys(pw)
self.browser.find_element_by_id("login").submit()
After the field is submitted, I am presented with this error in the DOM (screenshot: DOM error alert), and this in the console (screenshot: 401 error).
Note:
I have tried logging in with Selenium on Instagram, and it works, so I know it has something to do with the structure of Target's website. Has anyone run into this issue before?
Thanks!
So I originally tried solving this issue using Chrome, but could not figure out why the page would not proceed after entering the login data. I thought maybe the page was protected by some bot-detection software, but could not find any proof.
I decided to try Safari on my Mac and actually had success. See the code below:
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
driver = webdriver.Safari(executable_path='/usr/bin/safaridriver')
driver.get('https://www.target.com/')
action = ActionChains(driver)
driver.find_element(By.XPATH, '//*[@id="account"]').click()
WebDriverWait(driver, 30).until(ec.presence_of_element_located((By.XPATH, '//*[@id="accountNav-signIn"]')))
action.send_keys(Keys.ENTER)
action.perform()
WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, '//h2[@class="sc-hMqMXs sc-esjQYD eXTUDl"]')))
driver.find_element(By.ID, 'username').click()
driver.find_element(By.ID, 'username').send_keys('foo')
time.sleep(5)
driver.find_element(By.ID, 'password').click()
driver.find_element(By.ID, 'password').send_keys('bar')
time.sleep(5)
driver.find_element(By.XPATH, "//button[@id='login']").send_keys(Keys.ENTER)
time.sleep(10)
driver.quit()
You will notice some time.sleeps which I am using to slow the program down (you can take these out).
I also tried Firefox and Edge, but had the same problems as Chrome.
In conclusion, it seems there could be some sort of bot protection blocking you from using Chrome (and also Edge and Firefox), given that those WebDrivers are being detected as automated. Safari (I believe) does not get detected as such.
I would also suggest reading through the below post, it may offer more insight.
Website navigates to no-access page using ChromeDriver and Chrome through Selenium probably Bot Protected
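If Chrome is still preferred, there are some commonly suggested ChromeOptions tweaks that reduce the obvious automation signals; a sketch is below, with no guarantee that it gets past Target's protection:
from selenium import webdriver

options = webdriver.ChromeOptions()
# Drop the "Chrome is being controlled by automated test software" infobar/flag.
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option("useAutomationExtension", False)
# Suppress the navigator.webdriver hint that some sites check.
options.add_argument("--disable-blink-features=AutomationControlled")

driver = webdriver.Chrome(options=options)
driver.get("https://www.target.com/")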
